Hello All,
I am trying to write restart code for my optimization problem using MPI. I have outputs on the global mesh as numpy arrays from the last iteration, which I need to feed into the restart process, but they first have to be scattered to the locally owned portions on each process.
I have reduced the code to the following MWE:
import numpy as np
from mpi4py import MPI
from petsc4py import PETSc
from dolfinx.fem import Function, functionspace
from dolfinx.mesh import CellType, GhostMode, create_rectangle

comm = MPI.COMM_WORLD
rank = comm.rank
size = comm.size
dtype = PETSc.ScalarType  # type: ignore

def mpi_print(s):
    print(f"Rank {comm.rank}: {s}")

domain = create_rectangle(MPI.COMM_WORLD, [np.array([0, 0]), np.array([2, 2])], [2, 2],
                          CellType.triangle, ghost_mode=GhostMode.shared_facet)

num_elements = domain.topology.index_map(2).size_local
num_nodes = domain.topology.index_map(0).size_local

# Sum the local counts over all processes in MPI_COMM_WORLD
num_elements_global = MPI.COMM_WORLD.allreduce(num_elements, op=MPI.SUM)
num_nodes_global = MPI.COMM_WORLD.allreduce(num_nodes, op=MPI.SUM)

if rank == 0:
    mpi_print(f"Number of local elements: {num_elements}")
    mpi_print(f"Number of local nodes: {num_nodes}")
    mpi_print(f"Number of global elements: {num_elements_global}")
    mpi_print(f"Number of global nodes: {num_nodes_global}")

# FE space
Vh0 = functionspace(domain, ("DG", 0))

numpy_array = np.arange(num_elements_global)  # global input
uD = Function(Vh0)
vec = uD.vector.getArray()  # local output
print(len(numpy_array), numpy_array)
print(len(vec), vec)

# Find the locally owned dof range
E = Function(Vh0)
E.x.scatter_forward()
imap = E.function_space.dofmap.index_map
local_range = np.asarray(imap.local_range, dtype=np.int32) * \
    E.function_space.dofmap.index_map_bs

# Communicate ranges and local data
ranges = comm.gather(local_range, root=0)
if rank == 0:
    for r in ranges:
        vec = numpy_array[r[0]:r[1]]
        print(vec)
Running on two processes (e.g. mpirun -n 2), the output is:
Rank 0: Number of local elements: 4
Rank 0: Number of local nodes: 4
Rank 0: Number of global elements: 8
Rank 0: Number of global nodes: 9
8 [0 1 2 3 4 5 6 7]
4 [0. 0. 0. 0.]
[4 5 6 7]
8 [0 1 2 3 4 5 6 7]
4 [0. 0. 0. 0.]
[0. 0. 0. 0.]
The expected output for vec is one owned slice per rank:
[0 1 2 3]
[4 5 6 7]
So, in short, my question is simple: is there a straightforward way to scatter the global numpy array into the locally owned arrays?
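For context, something like the following sketch is what I imagine, assuming the global array is replicated on every rank as in the MWE (so no collective call is strictly needed), reusing uD and imap from above:

# Sketch, assuming numpy_array is replicated on every rank:
# each rank copies the slice it owns into the owned part of
# the function's array, then updates the ghost values.
start, end = imap.local_range
bs = Vh0.dofmap.index_map_bs  # block size (1 for scalar DG0)
uD.x.array[:(end - start) * bs] = numpy_array[start * bs:end * bs]
uD.x.scatter_forward()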
This post was helpful for converting local data to global (gather), but not for global to local (scatter):
In parall computation(mpirun), how to communicate(scatter & gather) the data between local processes and some global process?
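If instead the restart data only lives on rank 0, I assume comm.Scatterv with the owned ranges as counts and displacements would be the way to go. A minimal sketch, assuming the owned ranges form contiguous blocks ordered by rank (as the gathered local ranges above suggest) and a block size of 1:

# Sketch, assuming numpy_array exists only on rank 0.
start, end = imap.local_range
counts = np.array(comm.allgather(end - start))
displs = np.array(comm.allgather(start))
recvbuf = np.empty(end - start, dtype=np.float64)
sendbuf = numpy_array.astype(np.float64) if rank == 0 else None
comm.Scatterv([sendbuf, counts, displs, MPI.DOUBLE], recvbuf, root=0)
uD.x.array[:end - start] = recvbuf  # owned entries only (bs = 1)
uD.x.scatter_forward()

Is one of these the intended approach, or is there a built-in way to do this?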
Thanks!