mpi4py.futures doesn't work with meshing

I am trying to use an MPIPoolExecutor like this:

# I did:
# conda create -n mpi_fenics fenics mpi4py
# source activate mpi_fenics
# mpiexec -n 2 python -m mpi4py.futures test.py

import os
from mpi4py.futures import MPIPoolExecutor

os.environ['OMP_NUM_THREADS'] = '1'

def f():
    # worker task: importing fenics and creating a Mesh is what reproduces the hang
    import fenics
    mesh = fenics.Mesh()
    return 1

if __name__ == "__main__":  # ← use this, see warning @ https://bit.ly/2HAk0GG
    ex = MPIPoolExecutor()
    fut = ex.submit(f)
    result = fut.result()
    print(result)

This hangs forever, and CPU usage stays at 100% the whole time. I am not sure what is happening; I think this is a bug.
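If it helps narrow this down, one way to see where each process is stuck is the standard-library faulthandler module (purely an optional diagnostic, not part of the repro; add it at the very top of test.py):

import faulthandler
faulthandler.dump_traceback_later(30, exit=True)  # print all thread tracebacks after 30 s, then exit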

My conda list:

(mpi_fenics) x86_64-conda_cos6-linux-gnu   ➜  ~  conda list --export
# This file may be used to create an environment using:
# $ conda create --name <env> --file <this file>
# platform: linux-64
binutils_impl_linux-64=2.31.1=h6176602_1
binutils_linux-64=2.31.1=h6176602_7
boost-cpp=1.70.0=ha2d47e9_0
bzip2=1.0.6=h14c3975_1002
ca-certificates=2019.3.9=hecc5488_0
certifi=2019.3.9=py37_0
cmake=3.14.5=hf94ab9c_0
curl=7.64.1=hf8cf82a_0
eigen=3.3.7=h6bb024c_1000
expat=2.2.5=hf484d3e_1002
fastcache=1.1.0=py37h516909a_0
fenics=2019.1.0=py37_2
fenics-dijitso=2019.1.0=py_2
fenics-dolfin=2019.1.0=py37h80b64ce_2
fenics-ffc=2019.1.0=py_2
fenics-fiat=2019.1.0=py_2
fenics-libdolfin=2019.1.0=h9041177_2
fenics-ufl=2019.1.0=py_2
gcc_impl_linux-64=7.3.0=habb00fd_1
gcc_linux-64=7.3.0=h553295d_7
gmp=6.1.2=hf484d3e_1000
gmpy2=2.0.8=py37hb20f59a_1002
gxx_impl_linux-64=7.3.0=hdf63c60_1
gxx_linux-64=7.3.0=h553295d_7
hdf5=1.10.4=mpi_mpich_ha7d0aea_1006
hypre=2.15.1=hc98498a_1001
icu=58.2=hf484d3e_1000
krb5=1.16.3=h05b26f9_1001
libblas=3.8.0=7_openblas
libcblas=3.8.0=7_openblas
libcurl=7.64.1=hda55be3_0
libedit=3.1.20170329=hf8c457e_1001
libffi=3.2.1=he1b5a44_1006
libgcc-ng=9.1.0=hdf63c60_0
libgfortran-ng=7.3.0=hdf63c60_0
liblapack=3.8.0=7_openblas
libssh2=1.8.2=h22169c7_2
libstdcxx-ng=9.1.0=hdf63c60_0
libuv=1.29.1=h516909a_0
metis=5.1.0=hf484d3e_1003
mpc=1.1.0=hb20f59a_1006
mpfr=4.0.2=ha14ba45_0
mpi=1.0=mpich
mpi4py=3.0.1=py37hf046da1_0
mpich=3.2.1=hc99cbb1_1010
mpmath=1.1.0=py_0
mumps-include=5.1.2=1007
mumps-mpi=5.1.2=h5bebb2f_1007
ncurses=6.1=hf484d3e_1002
numpy=1.15.4=py37h8b7e671_1002
openblas=0.3.5=h9ac9557_1001
openssl=1.1.1b=h14c3975_1
parmetis=4.0.3=hb1a587f_1002
petsc=3.11.1=h624fa55_1
petsc4py=3.11.0=py37h906564f_0
pip=19.1.1=py37_0
pkg-config=0.29.2=h14c3975_1005
pkgconfig=1.3.1=py37_1001
ptscotch=6.0.6=h5a4526e_1002
pybind11=2.2.4=py37hc9558a2_1001
python=3.7.3=h5b0a415_0
readline=7.0=hf8c457e_1001
rhash=1.3.6=h14c3975_1001
scalapack=2.0.2=h2831592_1005
scotch=6.0.6=h491eb26_1002
setuptools=41.0.1=py37_0
six=1.12.0=py37_1000
slepc=3.11.1=h00d104f_0
slepc4py=3.11.0=py37hce3d510_0
sqlite=3.28.0=h8b20d00_0
suitesparse=4.5.6=heab0a99_1202
sympy=1.4=py37_0
tbb=2019.7=hc9558a2_0
tk=8.6.9=hed695b0_1002
wheel=0.33.4=py37_0
xz=5.2.4=h14c3975_1001
zlib=1.2.11=h14c3975_1004

Just a suggestion (I don't know whether it will make a difference), but try setting the environment variable OMP_NUM_THREADS directly after importing os, so that it is the very first thing the script does; see the sketch below.
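A minimal sketch of that reordering (only the top of test.py changes; the rest stays the same):

import os
os.environ['OMP_NUM_THREADS'] = '1'  # set before anything else is imported

from mpi4py.futures import MPIPoolExecutor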

Otherwise, try setting the variable in the shell beforehand and see if that helps.
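For instance, assuming a bash-like shell:

export OMP_NUM_THREADS=1
mpiexec -n 2 python -m mpi4py.futures test.py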

Thanks for your reply @plugged; unfortunately, that didn't work.

I figured it out (with help) and posted the solution here.