Problem with interpolating a mixed function over a submesh in parallel

Hi,

I am trying to initialize a function in a mixed space. The mixed space is defined over a submesh, and the code runs in parallel. I have put together the minimal example below, but since I needed to use MeshView I couldn't use the FEniCS built-in meshes, so I have added a link to my GitHub with the mesh file I used.

from fenics import *
import numpy as np

# Read the mesh and its cell/facet markers from file
mesh = Mesh()
hdf5 = HDF5File(mesh.mpi_comm(), "Mesh.h5", "r")
hdf5.read(mesh, "/mesh", False)
Volume = MeshFunction("size_t", mesh, 2)
hdf5.read(Volume, "/Volume")
bnd_mesh = MeshFunction("size_t", mesh, 1)
hdf5.read(bnd_mesh, "/bnd_mesh")

# Extract the cells tagged 3 as a submesh
Sub_Mesh0 = MeshView.create(Volume, 3)
surface_marker0 = MeshFunction("size_t", Sub_Mesh0, Sub_Mesh0.topology().dim() - 1, 0)
volume_marker0 = MeshFunction("size_t", Sub_Mesh0, Sub_Mesh0.topology().dim())

# Mixed space of three scalar P1 fields on the submesh
P1 = FiniteElement('P', triangle, 1)
element = MixedElement([P1, P1, P1])
Mixed_Space = FunctionSpace(Sub_Mesh0, element)
U = Function(Mixed_Space)
u1, u2, u3 = split(U)

ox_0 = Expression('x[0]', degree=1)
Tn_0 = Expression('x[1]', degree=1)
Th_0 = Expression('x[0]*x[1]', degree=2)

# Interpolate the initial data into the collapsed subspaces ...
ox0 = interpolate(ox_0, Mixed_Space.sub(0).collapse())
Tn0 = interpolate(Tn_0, Mixed_Space.sub(1).collapse())
Th0 = interpolate(Th_0, Mixed_Space.sub(2).collapse())

# ... and assign them to the components of the mixed function
assign(U.sub(0), ox0)
assign(U.sub(1), Tn0)
assign(U.sub(2), Th0)

and the error is:

Traceback (most recent call last):
  File "/home/nm24a/Test/untitled2.py", line 24, in <module>
    ox0 = interpolate(ox_0, Mixed_Space.sub(0).collapse())
  File "/usr/local/lib/python3.5/dist-packages/dolfin/function/functionspace.py", line 124, in sub
    return FunctionSpace(cpp.function.FunctionSpace.sub(self._cpp_object, i))
RuntimeError: 

*** -------------------------------------------------------------------------
*** DOLFIN encountered an error. If you are not able to resolve this issue
*** using the information listed below, you can ask for help at
***
***     fenics-support@googlegroups.com
***
*** Remember to include the error message listed below and, if possible,
*** include a *minimal* running example to reproduce the error.
***
*** -------------------------------------------------------------------------
*** Error:   Unable to build sub-dofmap view.
*** Reason:  Re-ordering map not available. It may be been cleared by the user.
*** Where:   This error was encountered inside DofMapBuilder.cpp.
*** Process: 1
*** 
*** DOLFIN version: 2019.2.0.dev0
*** Git changeset:  946dbd3e268dc20c64778eb5b734941ca5c343e5

You can find the mesh file here.
Any help would be much appreciated.

Best,

More observations: I realize the problem is with Mixed_Space.sub(0), more specifically the .sub(0) part. I also get the same error if I do something like this:
u1_, u2_, u3_ = U.split()
Is this a bug?
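
A side note that may help with debugging: the symbolic split(U) in the snippet above runs fine, since it is a purely UFL-level operation, while U.split() and Mixed_Space.sub(i) both have to build sub-dofmap views. One workaround I could try, though I haven't verified it on this mesh, is to initialize all three components through a single vector-valued Expression, so that no sub-dofmap view is requested at all:

# Hypothetical workaround: interpolate straight into the mixed space,
# bypassing .sub() and .collapse() entirely
U_init = Expression(('x[0]', 'x[1]', 'x[0]*x[1]'), degree=2)
U.interpolate(U_init)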

I cannot reproduce this using that git changeset or the latest Docker image.
Please check that you are using the latest version of FFC.
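
For reference, a quick way to print the versions in use (a minimal sketch, assuming both packages import cleanly):

import dolfin
import ffc
print(dolfin.__version__)  # should match the DOLFIN version in the traceback
print(ffc.__version__)     # FFC is expected to be in sync with DOLFIN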

Hi,
As I didn't succeed in opening your mesh file just by putting it next to my code, I changed a few things. If you are sure of how you build your mesh, then take a look at this:

from fenics import *

# Built-in meshes instead of the file-based submesh
mesh = RectangleMesh(Point(-1, -1), Point(1, 1), 30, 30)
sub_mesh = RectangleMesh(Point(0, 0), Point(1, 1), 30, 30)

P1 = FiniteElement('P', mesh.ufl_cell(), 1)
element = MixedElement([P1, P1, P1])
Mixed_Space = FunctionSpace(sub_mesh, element)
U = Function(Mixed_Space)
u1, u2, u3 = split(U)

ox_0 = Expression('x[0]', degree=1)
Tn_0 = Expression('x[1]', degree=1)
Th_0 = Expression('x[0]*x[1]', degree=2)

ox0 = interpolate(ox_0, Mixed_Space.sub(0).collapse())
Tn0 = interpolate(Tn_0, Mixed_Space.sub(1).collapse())
Th0 = interpolate(Th_0, Mixed_Space.sub(2).collapse())

I think the biggest change is P1 = FiniteElement('P', mesh.ufl_cell(), 1), with mesh.ufl_cell() instead of a hard-coded triangle. I hope it works for you.
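
If it helps, you can check what mesh.ufl_cell() resolves to (a small sketch; note that the element should be built from the cell of the mesh the space actually lives on):

print(mesh.ufl_cell())      # triangle, for these 2D meshes
print(sub_mesh.ufl_cell())  # same here, so either mesh works in this case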

Thanks for the email, I really appreciate it. I think the problem comes from extracting the submesh using MeshView.create and then building a mixed FunctionSpace on it. Somehow, somewhere, the "Re-ordering map" gets cleared, or doesn't even get created, when running under MPI.
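
For anyone who wants to try reproducing this without my .h5 file: if MeshView.create accepts a cell marker on a built-in mesh (I had trouble with this on my side, hence the mesh file), the same construction would look roughly like this (untested in this exact form):

from fenics import *

# Mark the right half of a built-in mesh and extract it as a submesh
mesh = UnitSquareMesh(32, 32)
marker = MeshFunction("size_t", mesh, mesh.topology().dim(), 0)
for cell in cells(mesh):
    if cell.midpoint().x() > 0.5:
        marker[cell] = 1
sub_mesh = MeshView.create(marker, 1)

# The same mixed space as before, now over the MeshView submesh
P1 = FiniteElement('P', mesh.ufl_cell(), 1)
Mixed_Space = FunctionSpace(sub_mesh, MixedElement([P1, P1, P1]))
U = Function(Mixed_Space)

# In parallel, this is the kind of call that triggers the sub-dofmap error
V0 = Mixed_Space.sub(0).collapse()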

Thanks for the reply, dokken. The version I am using is the one you helped us build here.
So, you don't get any errors and it runs smoothly with that .h5 mesh I have on GitHub?

Yes, everything runs smoothly.

Interesting. Is this Docker version different from the one here?
Is it only compatible with Docker?

I tried both building DOLFIN from source (without Docker) and using the Docker image.

That’s very strange. And did you do an MPI run? Because in serial it runs fine for me as well.

I did; I tested it with 3 and 5 processors.

So, I ran it with 3 and 5 processors and it went smoothly. As a matter of fact, it runs smoothly with up to 8 processors, but with 9 or more it gives that error. Now I am more confused. Do you know why this happens?

Hello nami,

a suggestion:

Some of our jobs also fail above a certain number of processors. I always assumed that was an issue with our MPI installation rather than FEniCS. I think we confirmed this by testing some of the PETSc example runs (not from FEniCS, but from the PETSc install scripts) and gradually increasing the number of processors until it failed; a minimal check along those lines is sketched below. It's been a while since we did this, so I can't remember the details of the tests.
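
A bare-bones sanity check of the MPI installation itself (a sketch, assuming mpi4py is available; run it with the process count that fails for you):

from mpi4py import MPI

comm = MPI.COMM_WORLD
# Each rank reports in; if this hangs or crashes at 9+ processes,
# the problem sits below FEniCS
print("rank", comm.Get_rank(), "of", comm.Get_size())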

regards

Thomas

Thank you, Thomas.
So, I think I figured out what was wrong. I don't know how precise this is, but it might be of interest to some: it turns out that whenever I create my mesh using the OpenCASCADE geometry kernel of Gmsh, this error happens. The same mesh built with the built-in kernel doesn't cause any problems, no matter how many processors I use.
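
For illustration, this is roughly how the two kernels are selected in the Gmsh Python API (a sketch, assuming Gmsh 4.x; the geometry here is made up):

import gmsh

gmsh.initialize()
gmsh.model.add("demo")

# OpenCASCADE kernel (the variant that produced the failing meshes here)
gmsh.model.occ.addRectangle(0, 0, 0, 1, 1)
gmsh.model.occ.synchronize()

# The built-in kernel equivalent would build the same geometry from
# gmsh.model.geo.addPoint / addLine / addCurveLoop / addPlaneSurface,
# followed by gmsh.model.geo.synchronize()

gmsh.model.mesh.generate(2)
gmsh.finalize()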