MPI communicator in FEniCS 2018.1

Dear all,

For exchanging data (NumPy arrays, to be precise) between FEniCS processes on 2017.2, I accessed the MPI communicator equivalent to mpi4py's MPI.COMM_WORLD with

comm = mesh.mpi_comm()
comm = comm.tompi4py()

and used the standard mpi4py functions such as

tmp_glob=comm.gather(tmp_loc,root=0)
ct_pos=comm.bcast(ct_pos,root=0)

for the NumPy arrays tmp_glob, tmp_loc and ct_pos, or

comm.Get_rank()

etc.
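
Put together, a minimal sketch of that 2017.2 pattern (UnitSquareMesh and the array contents are just placeholders here; the names follow the snippets above):

from dolfin import UnitSquareMesh
import numpy as np

mesh = UnitSquareMesh(8, 8)
comm = mesh.mpi_comm().tompi4py()   # convert to an mpi4py communicator (2017.2)

rank = comm.Get_rank()
tmp_loc = np.array([float(rank)])           # per-process data
tmp_glob = comm.gather(tmp_loc, root=0)     # list of arrays on rank 0, None elsewhere
ct_pos = np.zeros(3) if rank == 0 else None
ct_pos = comm.bcast(ct_pos, root=0)         # now defined on every rank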

However, trying to access the MPI communicator in FEniCS 2018.1 as shown above no longer works.
But when using the new attribute

comm = MPI.comm_world

I get errors like

if comm.Get_rank() == 0:
AttributeError: 'dolfin.cpp.MPICommWrapper' object has no attribute 'Get_rank'

or similar for gathering and broadcasting.

That's why I am wondering how to access the MPI communicator in FEniCS 2018.1 so that I can use the same mpi4py functions for process communication as in 2017.2.

Thanks in advance!

I’m surprised you’re getting the last error mentioned. The following script runs without error for me, using FEniCS 2018.1.0:

from dolfin import *
from numpy import array
from mpi4py import MPI as pyMPI
comm = MPI.comm_world
rank = comm.Get_rank()  # no AttributeError
sumOfRanks = comm.allreduce(array([rank], dtype='int32'), op=pyMPI.SUM)
if rank == 0:
    print(sumOfRanks[0])
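
(If it is run in parallel, e.g. with mpirun -np 4 python3 followed by your script name, it should print 6, the sum of the ranks 0+1+2+3.)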

Does it run correctly for you?

Trying your code, I get the same attribute error as described in my post above:

Traceback (most recent call last):
  File "mpitest.py", line 5, in <module>
    rank = comm.Get_rank()  # no AttributeError
AttributeError: 'dolfin.cpp.MPICommWrapper' object has no attribute 'Get_rank'

Update: The problem seems to be related to an installation issue. I tested the code on a FEniCS version running in a Docker container and everything worked fine.
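
(For anyone who wants to reproduce this: assuming the standard fenicsproject images, such a container can be started with something like docker run -ti quay.io/fenicsproject/stable:latest.)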

What type of installation gave you the error? Was it a local build or an official distribution?

It was an installation built on an HPC cluster. Actually, my university's HPC service team was able to fix the problem: although I was able to run calculations in parallel, dolfin seemed not to be configured with mpi4py…
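
For anyone running into the same thing, a quick check (just a sketch) is to look at the type of the communicator dolfin hands back: if the build was configured with mpi4py, MPI.comm_world is an mpi4py Intracomm, otherwise it is the MPICommWrapper that raises the AttributeError above.

from dolfin import MPI
from mpi4py import MPI as pyMPI

comm = MPI.comm_world
print(type(comm))                         # mpi4py.MPI.Intracomm vs. dolfin.cpp.MPICommWrapper
print(isinstance(comm, pyMPI.Intracomm))  # True only if dolfin was built with mpi4py support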

Hi,

I am also trying to run FEniCS in parallel using "mpi_comm_self" on a remote machine. I am using dolfin 2019.1. How did you fix this issue?