Accessing the global mesh (created with comm_self) in parallel

Hi everyone,
I would need to access both the local and global mesh data from each MPI process when running FEniCS in parallel. I know that the easiest way to do so is to init two different meshes:

from fenics import *

local_mesh = UnitSquareMesh(20, 20)
global_mesh = UnitSquareMesh(MPI.comm_self, 20, 20)

However, I would like a function which allows me to get the global mesh directly from the local one, something like:

local_mesh = UnitSquareMesh(20, 20)
global_mesh = get_global_mesh(local_mesh)

At the moment, the only way I have found to do this is to save the local mesh to a temporary file and reload it on each process, as in the following:

import os
import fenics

def get_global_mesh(local_mesh):
    # create mesh file in a temp folder
    tmp_folder = ".tmp"
    mesh_path_str = tmp_folder + "/mesh.xdmf"
    if fenics.MPI.comm_world.Get_rank() == 0:
        os.makedirs(tmp_folder, exist_ok=True)
    fenics.MPI.comm_world.barrier()
    # write the distributed mesh to file (collective over comm_world)
    local_mesh_xdmf = fenics.XDMFFile(mesh_path_str)
    local_mesh_xdmf.write(local_mesh)
    local_mesh_xdmf.close()
    # create an empty mesh on comm_self and read the whole mesh into it
    global_mesh = fenics.Mesh(fenics.MPI.comm_self)
    global_mesh_xdmf = fenics.XDMFFile(fenics.MPI.comm_self, mesh_path_str)
    global_mesh_xdmf.read(global_mesh)
    global_mesh_xdmf.close()
    # remove the temp files on one process only
    fenics.MPI.comm_world.barrier()
    if fenics.MPI.comm_world.Get_rank() == 0:
        os.remove(mesh_path_str)
        os.remove(tmp_folder + "/mesh.h5")
    return global_mesh

However, I was wondering whether there is a cleaner way to do the same. I didn’t find anything in the FEniCS documentation. Any ideas?
Thank you in advance.

I’m not sure what your end-goal is with this approach.
However, I would not recommend it: what you call the global mesh creates a copy of the whole mesh geometry and topology on every process. This will drastically increase memory usage and will not scale well as the number of processes grows.

If you could add some motivation as to why you need the whole mesh on every process, that could make it easier for people to help you.

Hi Dokken, and thank you for the quick answer.

I guess you are right. After thinking more carefully about why I need what I called a “global_mesh”, I realized I actually only need two things:

  1. access to the global_mesh.coordinates() array, in order to randomly pick an exact number of points (e.g. 200) without duplicates;
  2. a way to check whether a point is inside the global mesh.

For point 1, a naive approach that does not require a global_mesh would be to gather all the local_mesh.coordinates() arrays on one process and then pick the number of points I need, doing something like:

import fenics
import random

comm = fenics.MPI.comm_world
rank = comm.Get_rank()

local_mesh = fenics.UnitSquareMesh(20, 20)
# local mesh coordinates
lmc = local_mesh.coordinates()
lmc_arrays = comm.gather(lmc, 0)
if rank == 0:
    # global mesh coordinates
    gmc = [c for array in lmc_arrays for c in array]
    # remove duplicates, somehow
    gmc = remove_duplicates_somehow(gmc)
    # pick some of them
    picked_c = random.sample(gmc, 200)
else:
    picked_c = None
# broadcast the picked coordinates to every process
picked_c = comm.bcast(picked_c, 0)

Do you think this could be a more scalable approach?
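For reference, `remove_duplicates_somehow` above is just a placeholder. One hypothetical way to implement it in plain Python (not a FEniCS function; assuming shared vertices coincide exactly, with rounding only as a safeguard against floating-point noise) could be:

```python
def remove_duplicates_somehow(coords, tol=1e-12):
    # Deduplicate coordinate tuples by rounding each component to a
    # tolerance grid and keeping the first occurrence of each key.
    seen = set()
    unique = []
    for c in coords:
        key = tuple(round(x / tol) for x in c)
        if key not in seen:
            seen.add(key)
            unique.append(c)
    return unique

# example: vertex lists from two "processes" sharing the vertex (0.5, 0.0)
gmc = [(0.0, 0.0), (0.5, 0.0), (0.5, 0.0), (1.0, 0.0)]
print(remove_duplicates_somehow(gmc))  # [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)]
```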

For point 2 I could use a similar approach: each process checks whether the given point is inside its local mesh, simply using:

local_bounding_box_tree = local_mesh.bounding_box_tree()

is_point_inside = local_bounding_box_tree.compute_first_entity_collision(point_to_check) < local_mesh.num_cells()

I could then gather the variable is_point_inside from all the processes and, if none of them is True, conclude that the point is not inside the global mesh. What do you think?
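That final combination step is a logical-OR reduction (in mpi4py terms, something like comm.allreduce(is_point_inside, op=MPI.LOR)). A minimal serial sketch of the logic, with a plain list standing in for the per-rank flags of a hypothetical 4-process run:

```python
# Each entry simulates one rank's local result of
# compute_first_entity_collision(...) < num_cells
is_point_inside_per_rank = [False, True, False, False]  # made-up values

# what an allreduce with a logical-OR operator would compute:
is_point_inside_global = any(is_point_inside_per_rank)
print(is_point_inside_global)  # True: rank 1 found the point in its local mesh
```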

I think this is a better approach. To get the unique vertices, you could do the following:

from dolfin import *
import numpy as np

mesh = UnitSquareMesh(10, 10)
top = mesh.topology()
# local-to-global map for the vertices (mesh entities of dimension 0)
global_vertex_indices = top.global_indices(0)

# total number of vertices across all processes
num_global_vertices = mesh.num_entities_global(0)

# local mesh coordinates
lmc = mesh.coordinates()

comm = MPI.comm_world
lmc_arrays = comm.gather(lmc, 0)
vertex_maps = comm.gather(global_vertex_indices, 0)

rank = comm.Get_rank()
if rank == 0:
    # assemble the global coordinate array; vertices shared between
    # processes are simply overwritten with identical values
    vertices = np.zeros((num_global_vertices, lmc.shape[1]))
    for coords, verts in zip(lmc_arrays, vertex_maps):
        vertices[verts] = coords
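The assignment vertices[verts] = coords does all the work here: each process's coordinates are scattered into the global array via the global vertex indices, so duplicates resolve themselves. A serial, stdlib-only simulation of the same idea (the two "processes" and their index maps below are made up for illustration):

```python
# Simulated gather results: per-process coordinate lists and the
# corresponding global vertex indices (both ranks share vertex 1).
lmc_arrays = [[(0.0, 0.0), (0.5, 0.0)], [(0.5, 0.0), (1.0, 0.0)]]
vertex_maps = [[0, 1], [1, 2]]
num_global_vertices = 3

vertices = [None] * num_global_vertices
for coords, verts in zip(lmc_arrays, vertex_maps):
    for c, v in zip(coords, verts):
        # a vertex owned by two processes is written twice with the same value
        vertices[v] = c
print(vertices)  # [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)]
```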

Hi Dokken,
Sorry for reporting back a bit late. I finally had the time to test your solution, and it works perfectly. Thank you for your help.