Problem with interpolation between domains in mixed-dimensional branch

Hi,

I have a problem with interpolation in the mixed-dimensional branch that occurs only in parallel: when I try to interpolate a function defined on one subdomain onto the whole domain, the program crashes with a segmentation fault. I have tried to design an MWE that illustrates the problem.

from dolfin import *
parameters["ghost_mode"] = "shared_facet" #shared_facet | shared_vertex

level = 10  # False | 10 | 13 | 16 | 20 | 30 | 40 | 50
set_log_level(level)

mesh = RectangleMesh(Point(0.0, 0.0), Point(1, 1), 50, 50, "crossed")

marker = MeshFunction("size_t", mesh, mesh.topology().dim(), 0)
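# Mark cells whose midpoint lies in the lower half of the domain (y < 0.5)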
for c in cells(mesh):
    marker[c] = (c.midpoint().y() - 0.5) < 1e-9

mesh1 = mesh
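# mesh2 is the submesh of the marked cells (the lower half), built as a MeshView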
mesh2 = MeshView.create(marker, 1)

dx1 = Measure("dx", domain = mesh1)
dx2 = Measure("dx", domain = mesh2)

VP = FunctionSpace(mesh1, "Lagrange", 1)
V = FunctionSpace(mesh2, "Lagrange", 1)
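# Mixed space: two scalar P1 fields on the submesh and one on the full mesh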
ME = MixedFunctionSpace(V, V, VP)

u = Function(ME)
v = TestFunctions(ME)

# Initialise the two components defined on the submesh
u.sub(0).assign(interpolate(Expression('10*x[1]', degree = 1), V))
u.sub(1).assign(interpolate(Expression('50*x[1]', degree = 1), V))

# Difference of the two submesh fields, projected onto the submesh space V
rho = project(u.sub(0) - u.sub(1), V)
rho.set_allow_extrapolation(True)

# Interpolate rho (defined on the submesh) onto the full-mesh space VP;
# this seems to be the step that triggers the segfault in parallel
u.sub(2).assign(interpolate(Expression('x[1] < 0.5 ? rho : 0', rho = rho, degree = 1), VP))

## The same crash happens if I use this alternative method instead:
# Avvp = PETScDMCollection.create_transfer_matrix(V, VP)
# u.sub(2).vector()[:] = Avvp*rho.vector()
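
For reference, I run the script in parallel with MPI roughly like this (the file name and the number of processes are just placeholders; in serial the same script runs without problems):

mpirun -n 2 python3 mwe.py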

The following error occurs:

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   PID 1228 RUNNING AT 096af297fa64
=   EXIT CODE: 139
=   CLEANING UP REMAINING PROCESSES
=   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Segmentation fault (signal 11)
This typically refers to a problem with your application.
Please see the FAQ page for debugging suggestions

The code is executed on a computing server with a Xeon processor running Xubuntu 18.04.2, using Docker and the latest development version.

Another thing I noticed, in a slightly different example (though it might not be related), is that the nonlinear solver sometimes fails in parallel by reaching the maximum number of iterations, while it works as expected in serial. In that case as well, I use a similar interpolation between meshes.

What could be the reason for this behavior, and how can I avoid it?