Mpi_comm not working for mesh in dolfin 2019.1

Hi,

I have a FEniCS installation, version 2019.1, on a remote machine.
I have a parallelized code which I previously ran with version 2017.1.

from dolfin import *
from mpi4py import MPI
comm = MPI.COMM_WORLD


mesh = Mesh(mpi_comm(),mpath)

where mpath is the path to the .xml mesh file.

I keep ending up with the error:

NameError: name 'mpi_comm' is not defined

Is this a known error? Am I doing something wrong?

There was a change in the API. The following runs in 2019.1:

from dolfin import *
mesh = UnitIntervalMesh(MPI.comm_world, 10)

Hi,

I have tried the change you suggested, but I suspect the MPI wrapper from FEniCS and the MPI module from mpi4py are conflicting. I made a small Python file just to see at what point it fails. After creating the mesh, it fails again when creating the function space.

from dolfin import *

from mpi4py import MPI as mpi

comm = mpi.COMM_WORLD

ip = comm.Get_rank()

print("Hello I am process ",ip)

mesh = UnitIntervalMesh(MPI.comm_world,10)

V = VectorFunctionSpace(mesh, "CG", 1)

Hello I am process 0
Calling FFC just-in-time (JIT) compiler, this may take some time.
Calling FFC just-in-time (JIT) compiler, this may take some time.
Hello I am process 1
Traceback (most recent call last):
File "trial_elasticity.py", line 10, in <module>
V = VectorFunctionSpace(mesh, "CG", 1)
File "/scinet/niagara/software/2018a/opt/intel-2018.2-intelmpi-2018.2/fenics/2017.2.0/lib/python3.6/site-packages/dolfin/function/functionspace.py", line 222, in VectorFunctionSpace
return FunctionSpace(mesh, element, constrained_domain=constrained_domain)
File "/scinet/niagara/software/2018a/opt/intel-2018.2-intelmpi-2018.2/fenics/2017.2.0/lib/python3.6/site-packages/dolfin/function/functionspace.py", line 31, in __init__
self._init_from_ufl(*args, **kwargs)
File "/scinet/niagara/software/2018a/opt/intel-2018.2-intelmpi-2018.2/fenics/2017.2.0/lib/python3.6/site-packages/dolfin/function/functionspace.py", line 43, in _init_from_ufl
mpi_comm=mesh.mpi_comm())
File "/scinet/niagara/software/2018a/opt/intel-2018.2-intelmpi-2018.2/fenics/2017.2.0/lib/python3.6/site-packages/dolfin/jit/jit.py", line 82, in mpi_jit
raise RuntimeError(error_msg)
RuntimeError: [Errno 30] Read-only file system: '/home/a/asarkar/sudhipv/.cache/dijitso'

Could you look into this?
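Note that the RuntimeError in the traceback is not an MPI conflict at all: the JIT compiler (dijitso) is trying to write its cache under $HOME, which is mounted read-only on the compute nodes of that cluster. One common workaround, and this is an assumption to verify against your cluster's documentation, is to point the cache at a writable scratch directory via the DIJITSO_CACHE_DIR environment variable before dolfin is imported. A sketch:

```python
import os
import tempfile

# Hypothetical scratch location: on a cluster this would typically be
# $SCRATCH; we fall back to a temp dir so the snippet is self-contained.
scratch = os.environ.get("SCRATCH", tempfile.gettempdir())
cache_dir = os.path.join(scratch, "dijitso-cache")
os.makedirs(cache_dir, exist_ok=True)

# Must be set before `import dolfin` so dijitso picks it up at startup.
os.environ["DIJITSO_CACHE_DIR"] = cache_dir
print("JIT cache redirected to:", cache_dir)
```

Equivalently, `export DIJITSO_CACHE_DIR=$SCRATCH/dijitso-cache` in the job script before launching Python should have the same effect.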

It works perfectly fine for me:

>>> from dolfin import *
>>> from mpi4py import MPI as mpi
>>> msh = UnitIntervalMesh(mpi.COMM_WORLD,10)
>>> VectorFunctionSpace(msh,'CG',1)
FunctionSpace(Mesh(VectorElement(FiniteElement('Lagrange', interval, 1), dim=1), 0), VectorElement(FiniteElement('Lagrange', interval, 1), dim=1))

Although I am using dolfin-2019.1.0 as opposed to 2017.2 and intel_mpi-2018.

Hi,

Could you try printing the rank too?
To get the rank, you would need to call MPI from mpi4py, right?
But I am confused as to which MPI to use for the Mesh call: the MPI imported from mpi4py, or the MPI wrapper from dolfin?

In my case I almost always use the MPI wrapper in dolfin and haven't faced any issues. You can use the built-in function has_mpi4py() to check whether dolfin was correctly configured with mpi4py, in case you built it from source. For most standard installations this should not be an issue.