Getting MPI rank as 0 when running in parallel locally

Hello,

I installed the latest FEniCS version (2019.1.0) from the Ubuntu PPA on my local desktop. I am not getting the correct rank and size when I run the following script in parallel.

from dolfin import *
from mpi4py import MPI as mpi

# Rank/size via DOLFIN's MPI wrapper
comm = MPI.comm_world
mpi_rank = comm.Get_rank()
mpi_size = comm.size
print("rank / size: ", mpi_rank, "/", mpi_size)

# Rank/size via mpi4py directly
pycomm = mpi.COMM_WORLD
pyid = pycomm.Get_rank()
pysize = pycomm.size
print("pyid/ pysize: ", pyid, "/", pysize)

When I run with 4 processes, I get this output:

rank / size:  0 / 1
pyid/ pysize:  0 / 1
rank / size:  0 / 1
pyid/ pysize:  0 / 1
rank / size:  0 / 1
pyid/ pysize:  0 / 1
rank / size:  0 / 1
pyid/ pysize:  0 / 1

I checked this and this, but could not find a solution there since those posts concern a Docker installation.
Could you please let me know how to resolve this issue?

Are you sure about your MPI installation?

I would generally recommend using either a Docker image or a Singularity container (more suitable for clusters) to run things in parallel out of the box. The image I use has super_lu_dist installed, which can be a useful alternative to MUMPS for larger problems. You could also simply run the official image linked on the downloads page: https://fenicsproject.org/download/
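
A common cause of every process reporting rank 0 of size 1 is a mismatch between the launcher and the MPI library the Python stack was built against (for example, launching with OpenMPI's mpirun while mpi4py/DOLFIN were built against MPICH, or vice versa). A minimal diagnostic sketch, assuming only that mpi4py is installed (the file name check_mpi.py is just a placeholder):

# check_mpi.py -- run with e.g. "mpirun -n 4 python3 check_mpi.py"
from mpi4py import MPI

# Name and version of the MPI implementation mpi4py was built against,
# e.g. ('MPICH', (3, 3, 0)) or ('Open MPI', (2, 1, 1))
print("mpi4py vendor:", MPI.get_vendor())

# With a matching launcher this should print ranks 0..3 of size 4;
# a mismatched launcher typically gives four copies of "0 / 1"
comm = MPI.COMM_WORLD
print("rank / size:", comm.Get_rank(), "/", comm.Get_size())

If the vendor reported here does not match the mpirun you are using (on Ubuntu the implementation-specific launchers are mpirun.mpich and mpirun.openmpi), launching with the matching one, or reinstalling mpi4py against the same MPI as DOLFIN, usually resolves the "always rank 0" behaviour.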
