I am migrating a project from dolfin to dolfinx and I am encountering difficulties with MPI. Are there strong intrinsic differences between dolfin.MPI and the mpi4py.MPI used in dolfinx?
My project used to run on several processors with dolfin. With dolfinx it runs on one processor but 'pauses' indefinitely with 2 or more processors. The hang happens when the program calls vector.getArray().
I am not able to recreate this error with a minimal example. The minimal example provided in the Poisson Equation tutorial runs fine on multiple processors with dolfinx.
There are no fundamental differences between dolfin.MPI and mpi4py.MPI, as dolfin.MPI simply wraps the mpi4py communicators; see: Bitbucket
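For reference, the legacy helpers map directly onto the mpi4py communicator you already pass around in dolfinx; a minimal sketch (plain mpi4py, nothing dolfinx-specific, the printed values are just illustrative):

from mpi4py import MPI

comm = MPI.COMM_WORLD
# Roughly what dolfin.MPI.rank(comm) and dolfin.MPI.size(comm) wrapped
print(f"rank {comm.rank} of {comm.size}")
# Collective operations must be reached by every rank, otherwise the program hangs
total = comm.allreduce(comm.rank, op=MPI.SUM)
print(f"sum of ranks: {total}")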
I guess you are accessing the underlying PETSc vector using u.vector.getArray(). Is there a reason for not using u.x.array instead? u.x is the dolfinx vector, which includes the ghost values at the end of its array, as opposed to the PETSc vector.
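To make the difference concrete, here is a minimal sketch (assuming a scalar P1 space; variable names are just illustrative) comparing the two views when run on two or more ranks:

from mpi4py import MPI
from dolfinx import fem, mesh

domain = mesh.create_unit_square(MPI.COMM_WORLD, 8, 8)
V = fem.FunctionSpace(domain, ("CG", 1))
u = fem.Function(V)
u.interpolate(lambda x: x[0])

owned = u.vector.getArray()  # PETSc view: owned dofs only
full = u.x.array             # dolfinx array: owned dofs followed by the ghosts
imap = V.dofmap.index_map
print(MPI.COMM_WORLD.rank, owned.size, full.size, imap.num_ghosts)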
I will try u.x.array as well, as I don't think there is a specific reason for using the PETSc vector (although I am also using the setValuesLocal method in that project). Any reason why this should freeze FEniCSx in Docker (Windows)?
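For reference, the replacement I have in mind for the write path is roughly the following sketch (assuming u is a scalar fem.Function; the values written are placeholders):

import numpy as np

bs = u.function_space.dofmap.index_map_bs
n_owned = u.function_space.dofmap.index_map.size_local * bs
u.x.array[:n_owned] = np.arange(n_owned, dtype=u.x.array.dtype)  # write owned entries
u.x.scatter_forward()  # propagate owned values to the ghost entries on other ranks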
As a quick test I also called vector.getArray() in the minimal example taken from the tutorial, which still ran fine (see below):
from mpi4py import MPI
from dolfinx import fem, mesh
from dolfinx.fem import FunctionSpace

# Mesh and function space from the tutorial
domain = mesh.create_unit_square(MPI.COMM_WORLD, 8, 8, mesh.CellType.quadrilateral)
V = FunctionSpace(domain, ("CG", 1))

# Interpolate the boundary expression and read back the local PETSc array
uD = fem.Function(V)
uD.interpolate(lambda x: 1 + x[0]**2 + 2 * x[1]**2)
vec = uD.vector.getArray()
print(vec)
I cannot see any reason for it freezing when accessing the local vector within PETSc. Note that vector.getArray() is a call into the PETSc library, not code inside dolfinx.
Have you checked if u.x.array gives the same error?
Without an MWE it is hard to give any further assistance.