 # Gather solution in parallel

I am doing a project which solves a PDE iteratively, with a global operation between two successive iterations.
Here is the pipeline:
solve pde
gather solution, do a global operation, send new coefficients
solve pde with new coefficients

How can I gather solution and send computed new coefficients to each processors in each step? Is there an easy functionality to do this?

Something like this?

```python
# u: solution Function from the PDE solve
mpi_comm = u.function_space().mesh().mpi_comm()
array = u.vector().get_local()

# gather the local solution arrays from all processes on rank 0
array_gathered = mpi_comm.gather(array, root=0)

# compute the new coefficients on rank 0 only
if mpi_comm.Get_rank() == 0:
    coef = compute_coefficients_glob(array_gathered)
else:
    coef = None

# broadcast from rank 0 to the other processes; the lowercase,
# pickle-based bcast returns the object, so `coef = None` on the
# non-root ranks is replaced by the broadcast result
coef = mpi_comm.bcast(coef, root=0)
```
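The gather → compute → broadcast round trip can be mocked serially to check the logic before running under MPI. This is a pure-numpy sketch with no MPI; `compute_coefficients_glob` here is a stand-in that just averages the full gathered solution:

```python
import numpy as np

def compute_coefficients_glob(pieces):
    # stand-in global operation: mean of the full gathered solution
    return np.mean(np.concatenate(pieces))

# pretend these are the local arrays held by 3 ranks
local_arrays = [np.array([1.0, 2.0]), np.array([3.0]), np.array([4.0, 5.0, 6.0])]

# "gather" on rank 0: the root sees the list of all local arrays
gathered = local_arrays

# compute on the root, then "broadcast": every rank ends up with the same value
coef = compute_coefficients_glob(gathered)
coefs_on_ranks = [coef for _ in local_arrays]
print(coefs_on_ranks)  # three identical values
```

Under real MPI the list comprehension is replaced by `mpi_comm.bcast`, but the data flow is the same.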

I don’t know how to gather the data in the correct order, or how to get the global indices of `u.vector()`. After doing the gather above, I have a gathered solution whose entries are in the wrong order, and I don’t see how to determine their ordering.

I figured out how to map local indices to global indices: `FunctionSpace` exposes a `DofMap` object (via `FunctionSpace.dofmap()`) with a method that provides this. With it I can gather the function onto one process in global-dof order. However, my function needs the values in vertex order, so: how can I map dofs to vertices in a parallel run? I know `vertex_to_dof_map` does the job when not running with MPI, but it gives strange results in parallel. For example,

```python
n = 100
mesh = UnitSquareMesh(comm_world, n, n)
U = FunctionSpace(mesh, 'CG', 1)
v2d = vertex_to_dof_map(U)
```

When I run this script on 2 processes, they own 5094 and 5107 dofs respectively. However, `v2d` on both processes has shape `(5151,)`, with values ranging over 0–5150.
I must have some misunderstanding here: `v2d` contains more entries than each process owns, and since both range over 0–5150, I guess these are local vertex indices.
How can I map local vertex indices to global ones? I checked the documentation and found nothing. Any suggestion is appreciated.

I figured this out and list the solution here to help others with the same need.

One thing I noticed is that
`mesh.coordinates()`
returns the coordinates in vertex (coordinate) order, and
`from fenics import dof_to_vertex_map`
this function, when called with a P1 function space as its argument, returns a map from dof order to vertex (coordinate) order; say we call that map `d2v`.
Thus,

```python
F1 = FunctionSpace(mesh, 'P', 1)
d2v = dof_to_vertex_map(F1)
dof_map = F1.dofmap()
first_dof, last_dof = dof_map.ownership_range()
local_counts = last_dof - first_dof
coord_dof = mesh.coordinates()[d2v[:local_counts]]
```

we get the coordinates in dof order, which is exactly the order of
`u.vector()`
The reason I slice `d2v` (and thus the coordinates) to `:local_counts` is that it contains more entries than the process owns. I haven’t fully worked out why; presumably the extra entries are vertices shared between partitions.
Then we gather all the coordinates with MPI and sort them to obtain a map from dof order to vertex order. Say the gathered coordinates are
`coord_all`
then we can get the ordering by

```python
# lexsort's last key is the primary one: sort by y, break ties by x
ind = np.lexsort((coord_all[:, 0], coord_all[:, 1]))
```
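As a quick sanity check of the `lexsort` call, here is a standalone numpy sketch. Note that the *last* key passed to `np.lexsort` is the primary one, so this sorts by y first and breaks ties by x:

```python
import numpy as np

# four vertex coordinates in an arbitrary (gathered) order
coord_all = np.array([[1.0, 1.0],
                      [0.0, 0.0],
                      [1.0, 0.0],
                      [0.0, 1.0]])

# primary key: y (last element of the tuple); secondary key: x
ind = np.lexsort((coord_all[:, 0], coord_all[:, 1]))
print(coord_all[ind])  # rows ordered (0,0), (1,0), (0,1), (1,1)
```

Any deterministic total order on the coordinates works here, as long as every process (or at least the root) computes the same `ind`.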

Finally, we gather the solution into a single numpy array and reorder it into coordinate order simply by

```python
pieces = mpi_comm.gather(u.vector().get_local(), root=0)  # called on every rank
if mpi_comm.Get_rank() == 0:
    u_all = np.concatenate(pieces)
    u_sorted = u_all[ind]
```

To scatter the newly computed field (say `f_all`, in coordinate order) back to each process, simply invert `ind`:

```python
ind_inv = np.empty_like(ind)
ind_inv[ind] = np.arange(len(ind), dtype=int)
scatter_buf = f_all[ind_inv]  # back in the gathered dof order
```
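The invert-and-restore step can be verified in isolation (numpy only, no MPI): reordering by `ind` and then indexing with its inverse must reproduce the original dof order. Here `f_all` is a mock field computed in coordinate order (just twice the solution), so the restored buffer should equal twice the original array:

```python
import numpy as np

rng = np.random.default_rng(0)
u_all = rng.standard_normal(6)          # gathered values in dof order
ind = rng.permutation(6)                # map: dof order -> coordinate order

u_sorted = u_all[ind]                   # values in coordinate order
f_all = 2.0 * u_sorted                  # new field computed in coordinate order

ind_inv = np.empty_like(ind)
ind_inv[ind] = np.arange(len(ind), dtype=int)
scatter_buf = f_all[ind_inv]            # back in the original dof order

print(np.allclose(scatter_buf, 2.0 * u_all))  # True
```

From here, `scatter_buf` can be split by each rank's ownership range and sent back with `mpi_comm.Scatterv` or a plain `scatter` of the per-rank pieces.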