Interpolation between different meshes created with "create_submesh" in DOLFINx

I’m running DOLFINx with two meshes (msh1, msh2) created from one parent mesh (msh_org).

The function u2 obtained by interpolating u1 depends on comm.size.

Running

mpirun -np <n> python test.py

for n from 1 to 8, u2 does not come out as the same vector every time.

Pseudocode of the situation:
1- create msh_org

2- create msh1 and msh2 using create_submesh
“msh, cell_tags, facet_tags = …(geometry.gmsh_model, comm, 0, partitioner=dolfinx.mesh.create_cell_partitioner(dolfinx.mesh.GhostMode.shared_facet))”

3 - transfer mesh tags from msh_org to the submeshes

4 - create function spaces V1 and V2 on msh1 and msh2, respectively

5 - create functions u1 and u2 on V1 and V2, respectively

6 - interpolate an arbitrary function onto u1:
" u1.interpolate(lambda x: np.vstack((x[0], x[1], x[2]))) "

7 - u2.interpolate(u1)

8 - gather u2 on rank 0 and sort the array
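As a side note on step 6: the interpolation callback receives the point coordinates as a (3, N) array and must return one row per vector component. A dolfinx-free numpy sketch of that shape contract (the batch of points here is made up for illustration):

```python
import numpy as np

# The callback from step 6: maps coordinates x of shape (3, N)
# to vector values with one row per component, shape (3, N)
f = lambda x: np.vstack((x[0], x[1], x[2]))

# A fake batch of N = 5 points, laid out the way dolfinx passes them
x = np.arange(15.0).reshape(3, 5)
values = f(x)

assert values.shape == (3, 5)
# For this particular f, the values equal the coordinates themselves
assert np.array_equal(values, x)
```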

The u2 obtained using 1 rank is different from the u2 obtained with rank counts > 1.

Note: not all rank counts > 1 lead to an incorrect u2.
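The sort in step 8 is what makes the comparison meaningful across rank counts: the per-rank dof ordering differs between partitionings, but sorting removes it. A small numpy-only illustration of that assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
global_vec = rng.random(30)

# Simulate two different partitionings of the same global vector
parts_a = np.array_split(global_vec, 2)  # e.g. 2 ranks
parts_b = np.array_split(global_vec, 5)  # e.g. 5 ranks

# Gather the pieces in an arbitrary (here: reversed) rank order
gathered_a = np.concatenate(parts_a[::-1])
gathered_b = np.concatenate(parts_b[::-1])

# After sorting, both gathers agree regardless of partitioning
assert np.allclose(np.sort(gathered_a), np.sort(gathered_b))
```

So if the sorted u2 still differs between runs, the actual values (not just their ordering) change with comm.size.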

Any help with this?

Thank you.

Please make a minimal code example that reproduces the error.
It's hard to give pointers when you only write pseudocode.

Please try to make an example on a unit square or cube.

Hi @dokken,

here is an example with a unit cube:

import numpy as np
from mpi4py import MPI

import dolfinx
from dolfinx.fem import Function, VectorFunctionSpace

msh1 = dolfinx.mesh.create_unit_cube(MPI.COMM_WORLD, 10, 10, 10, dolfinx.mesh.CellType.tetrahedron)

submesh_entities = dolfinx.mesh.locate_entities(msh1, dim=3, marker=lambda x: (x[0] < 0.5) & (x[1] < 0.5))
msh2, entity_map, vertex_map, geom_map = dolfinx.mesh.create_submesh(msh1, msh1.topology.dim, submesh_entities)

V1 = VectorFunctionSpace(msh1, ("Lagrange", 1))
V2 = VectorFunctionSpace(msh2, ("Lagrange", 1))

u1 = Function(V1)
u2 = Function(V2)

u1.interpolate(lambda x: np.vstack((x[0], x[1],x[2])))
u2.interpolate(u1)


def store_vec_rank1(functionspace, vector):
    # Gather the distributed dof vector onto rank 0, ordered by global dof index
    imap = vector.function_space.dofmap.index_map
    bs = vector.function_space.dofmap.index_map_bs
    local_range = np.asarray(imap.local_range, dtype=np.int64) * bs
    size_global = imap.size_global * bs

    # Communicate ownership ranges and locally owned data to rank 0
    ranges = MPI.COMM_WORLD.gather(local_range, root=0)
    data = MPI.COMM_WORLD.gather(vector.vector.array, root=0)

    # Assemble the gathered array on rank 0 (it stays zero on the other ranks)
    global_array = np.zeros(size_global)
    if MPI.COMM_WORLD.rank == 0:
        for r, d in zip(ranges, data):
            global_array[r[0]:r[1]] = d

    return global_array

solution_vec = np.sort(store_vec_rank1(V2, u2))

if MPI.COMM_WORLD.size == 1:
    # Reference run: store the sorted serial result to disk
    np.savetxt('solution_1rank_output.txt', solution_vec, delimiter=',')

if MPI.COMM_WORLD.rank == 0 and MPI.COMM_WORLD.size > 1:
    # Compare against the serial reference (run with -np 1 first)
    vec_check = np.genfromtxt('solution_1rank_output.txt', delimiter=",")
    mismatch = False
    for n in range(solution_vec.size):
        if abs(vec_check[n] - solution_vec[n]) > 1e-6:
            mismatch = True
            print(f"solution vec incorrect in {n}th entry: {vec_check[n]} compared to {solution_vec[n]}, np = {MPI.COMM_WORLD.size}")
    if not mismatch:
        print(f"all values match with np = {MPI.COMM_WORLD.size}")
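One practical note on the file-based check above: the savetxt/genfromtxt round trip is only as precise as the text format used, which is why the 1e-6 tolerance (rather than exact equality) matters. A standalone sketch of that round trip, using a temporary file instead of the script's solution_1rank_output.txt:

```python
import os
import tempfile

import numpy as np

# Some sorted reference data standing in for the serial solution vector
vec = np.sort(np.random.default_rng(1).random(10))

# Round-trip through a text file, as the reference check in the script does
path = os.path.join(tempfile.mkdtemp(), "reference.txt")
np.savetxt(path, vec, delimiter=",")
loaded = np.genfromtxt(path, delimiter=",")

# numpy's default savetxt format ('%.18e') keeps ~18 significant digits,
# so the round-trip error is far below the 1e-6 comparison tolerance
assert loaded.shape == vec.shape
assert np.all(np.abs(loaded - vec) < 1e-6)
```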