Hi everyone,
I need to access both the local and the global mesh data from each MPI process when running FEniCS in parallel. I know that the easiest way to do so is to initialize two different meshes:
I’m not sure what your end-goal is with this approach.
However, I would not recommend it: what you call the global mesh creates a copy of the whole mesh geometry and topology on every process. This will drastically increase memory usage and will not scale well as the number of processes grows.
If you could add some motivation as to why you need the whole mesh on every process, that could make it easier for people to help you.
I guess you are right. After a deeper analysis of the reasons why I need what I called a “global_mesh”, I realized I just need to do these two things:
I need to access the global_mesh.coordinates() array in order to randomly pick an exact number of points (e.g. 200) without duplicates.
I need to check whether a given point is inside the global mesh.
For point 1, a naive approach which does not require a global_mesh could be to gather all the local_mesh.coordinates() arrays on one process and then pick the number of points I need, doing something like:
import random
import fenics

comm = fenics.MPI.comm_world
rank = comm.Get_rank()
# local mesh coordinates (local_mesh is assumed to exist already)
lmc = local_mesh.coordinates()
# gather every process's local coordinates on rank 0
lmc_arrays = comm.gather(lmc, root=0)
if rank == 0:
    # global mesh coordinates: flatten the list of per-process arrays
    gmc = [c for array in lmc_arrays for c in array]
    # remove duplicates, somehow
    gmc = remove_duplicates_somehow(gmc)
    # pick some of them
    picked_c = random.sample(gmc, 200)
else:
    picked_c = None
# broadcast the picked coordinates to all processes
picked_c = comm.bcast(picked_c, root=0)
Do you think this could be a more scalable approach?
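For the remove_duplicates_somehow step, one option is numpy.unique with axis=0, which drops duplicate coordinate rows in a single call. This works here because vertices shared between partitions have bitwise-identical coordinates, so exact float comparison is safe. A minimal sketch with made-up stand-in arrays (the real ones would come from the gather above):

```python
import random

import numpy as np

# Hypothetical per-process coordinate arrays, standing in for the gathered
# local_mesh.coordinates() arrays; shared vertices appear on several ranks.
lmc_arrays = [
    np.array([[0.0, 0.0], [0.5, 0.0], [0.5, 0.5]]),
    np.array([[0.5, 0.5], [0.5, 0.0], [1.0, 0.5]]),
]

# Stack all per-process arrays into one (n, dim) array
stacked = np.vstack(lmc_arrays)
# Drop duplicate rows; result is the set of unique vertex coordinates
gmc = np.unique(stacked, axis=0)

# Sample without replacement (sample size kept small for this toy data)
picked_c = random.sample(list(map(tuple, gmc)), 2)
```

Note that np.unique compares coordinates for exact equality; if the partitioned coordinates could differ by round-off, a tolerance-based deduplication would be needed instead.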
For point 2 I could use a similar approach. Each process could check whether the given point is inside its local mesh simply using:
And then I could gather the variable is_point_inside from all the processes and, if none of them is True, conclude that the point is not inside the global mesh. What do you think?
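The combination step you describe is just a logical OR over the per-rank results: the point is in the global mesh iff at least one partition contains it. A tiny sketch with made-up flags (in the real run each rank computes exactly one local flag; with mpi4py an allreduce with op=MPI.LOR would give every rank the answer without gathering on one process):

```python
# Hypothetical per-rank results of the local "is the point inside my
# partition?" check, as they would look after a gather on one rank.
is_point_inside_per_rank = [False, True, False, False]

# Global answer: the point lies in the global mesh iff any partition
# reports that it contains the point.
point_in_global_mesh = any(is_point_inside_per_rank)
```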