Expression class causing trouble for Python embedded in C++

Using dolfin version 2019.1.0 and Open MPI 2.1.1.

I am embedding Python in C++ (to get both the Python interface for FEniCS and access to MPI functions not available in mpi4py). The Expression class is giving me some trouble, as follows:

Minimal example, 2 files:

min_ex.cpp

#include "mpi.h"
#include "Python.h"
#include <iostream>

// compile via "mpic++ -I /usr/include/python3.6m/ min_ex.cpp -lpython3.6m",
// or similar, depending on which Python version you are using
int main(int argc, char *argv[]){
MPI_Init(&argc, &argv);
int ID_SELF;
MPI_Comm_rank(MPI_COMM_WORLD, &ID_SELF);

Py_Initialize(); // Python initialize

PyRun_SimpleString("import sys; import os; sys.path.append(os.getcwd())");
PyObject *pFile = PyImport_Import(PyUnicode_FromString("py_min")); // import runs py_min.py

std::cout << "all done " << ID_SELF << std::endl;
Py_Finalize(); // Python finalize
std::cout << "py finalized " << ID_SELF << std::endl;
MPI_Finalize();
return 0;
}

py_min.py

import dolfin as dol
from mpi4py import MPI

mesh = dol.UnitSquareMesh(MPI.COMM_SELF, 10, 10)
V = dol.FunctionSpace(mesh, "CG", 1)
u0 = dol.interpolate(dol.Expression("1", degree=1), V)
print("interpolation successful", u0)

Running this works, except that MPI_Finalize() always fails with

*** The MPI_Comm_rank() function was called after MPI_FINALIZE was invoked.
*** This is disallowed by the MPI standard.
*** Your MPI job will now abort.

The culprit appears to be dol.Expression, or rather the interpolate call on it. I saw that previous FEniCS versions had an optional mpi_comm parameter for Expression, but it seems to have disappeared. I also tried using my own class inherited from UserExpression, roughly as sketched below, but the problem persists.
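For reference, a minimal sketch of that UserExpression attempt (the class name ConstExpr and the constant value are illustrative, not my exact code):

import dolfin as dol
from mpi4py import MPI

class ConstExpr(dol.UserExpression):
    def eval(self, values, x):
        values[0] = 1.0  # constant value everywhere

    def value_shape(self):
        return ()  # scalar-valued expression

mesh = dol.UnitSquareMesh(MPI.COMM_SELF, 10, 10)
V = dol.FunctionSpace(mesh, "CG", 1)
u0 = dol.interpolate(ConstExpr(degree=1), V)  # triggers the same MPI_Finalize() error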

Any help would be appreciated.

EDIT: This answer turned out to be incorrect; see the follow-up below.

Changing py_min.py to

import dolfin as dol
#from mpi4py import MPI

#mesh = dol.UnitSquareMesh(MPI.COMM_SELF, 10, 10)
mesh = dol.UnitSquareMesh(dol.MPI.comm_self, 10, 10)
V = dol.FunctionSpace(mesh, "CG", 1)
u0 = dol.interpolate(dol.Expression("1", degree = 1), V)
print("interpolation successful", u0)

seems to do the trick.

Glad to hear it can be fixed, thanks! However, it does not appear to work for me. What dolfin and MPI versions are you using?

Okay, my old answer was incorrect; it doesn't work on my system either.

I took another look at this just now, and one thing that does seem to work is modifying the C++ file to

#include "mpi.h"
#include "Python.h"
#include "iostream"

// compile via "mpic++ -I /usr/include/python3.6m/ min_ex.cpp -lpython3.6m",
// or similar, depending what python version you are using
int main(int argc, char *argv[]){
  
  //MPI_Init(&argc, &argv);  
  Py_Initialize(); // Python initialize
  PyRun_SimpleString("import dolfin"); // Initializes MPI
  
  int ID_SELF;
  MPI_Comm_rank(MPI_COMM_WORLD, &ID_SELF);
  
  PyRun_SimpleString("import sys; import os; sys.path.append(os.getcwd())");
  PyObject *pFile = PyImport_Import(PyUnicode_FromString("py_min")); // import runs py_min.py
  
  std::cout << "all done " << ID_SELF << std::endl;
  Py_Finalize(); // Python finalize
  std::cout << "py finalized " << ID_SELF << std::endl;
  //MPI_Finalize();
  return 0;
}

This kind of works. For more than one process it will complain about improper exiting, but one can mute that with an additional -quiet flag for mpirun/mpiexec.
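For example, assuming the compile command from the comment above (which produces a.out):

mpirun -quiet -n 2 ./a.out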

Thanks!

I know this is quite a long time after the original post. However, I figured it is reasonable to note that you can pass an MPI communicator to the Expression class as a keyword argument, as shown in: Error with IO in parallel - #4 by dokken
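A minimal sketch of that approach, assuming the mpi_comm keyword described in the linked post:

import dolfin as dol

mesh = dol.UnitSquareMesh(dol.MPI.comm_self, 10, 10)
V = dol.FunctionSpace(mesh, "CG", 1)
# pass the mesh's communicator to the JIT-compiled Expression
f = dol.Expression("1", degree=1, mpi_comm=mesh.mpi_comm())
u0 = dol.interpolate(f, V)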