IPOPT doesn't work in parallel with MPI

Hello everyone,

I tried to solve the time-distributed control demo with the IPOPT solver in parallel using MPI, and it didn't work.
Usually (or at least with L-BFGS-B) I can run any serial dolfin-adjoint program in parallel simply by launching it with mpirun -n.

For instance, running the following MWE with mpirun -n 2 python3 works just fine:

from fenics import *
from fenics_adjoint import *
from collections import OrderedDict
import numpy as np

set_log_active(False)

data = Constant(2)
nu = Constant(1e-5)

dt = Constant(0.1)
T = 2

PDEs_numb = 1
numDataPerRank = int(PDEs_numb)

alphas = np.ones(PDEs_numb)
my_alpha = alphas

nx = ny = 20
mesh = UnitSquareMesh(nx, ny)
V = FunctionSpace(mesh, "CG", 1)

ctrls = OrderedDict()
t = float(dt)
while t <= T: 
    ctrls[t] = Function(V)
    t += float(dt)

def poisson(alpha, u_0, f):
    # One implicit Euler step of the heat equation with source f.
    u, v = TrialFunction(V), TestFunction(V)
    bc = DirichletBC(V, Constant(0), 'on_boundary')
    a = (u*v + dt*inner(Constant(alpha)*nu*grad(u), grad(v)))*dx
    L = (u_0 + dt*f)*v*dx
    uh = Function(V)
    solve(a == L, uh, bc) 
    return uh

def solve_heat(ctrls):
    u_0 = []
    f = Function(V, name="source")
    d = Function(V, name="data")
    d.assign(project(data, V))  
    t = float(dt)
    j = 0

    for i in range(numDataPerRank):
        u_0.append(Function(V, name="solution"))
        j += 0.5 * float(dt) * assemble((u_0[i] - d) ** 2 * dx)   
    
    while t <= T:
        f.assign(ctrls[t])
        
        for i in range(numDataPerRank):
            u_0[i] = poisson(my_alpha[i], u_0[i], f)
            j += 0.5 * float(dt) * assemble((u_0[i] - d) ** 2 * dx)

        t += float(dt)

    return u_0, d, j

u, d, j = solve_heat(ctrls)

alpha1 = Constant(1e-5)
regularisation = alpha1/2*sum([1/dt*(fb-fa)**2*dx for fb, fa in
    zip(list(ctrls.values())[1:], list(ctrls.values())[:-1])])

J = j + assemble(regularisation)
m = [Control(c) for c in ctrls.values()]

rf = ReducedFunctional(J, m)

opt_ctrls = minimize(rf, options={"maxiter": 3, 'disp': True})

Solving the same problem (MWE below), also on 2 processes (mpirun -n 2 python3), but with the IPOPT solver, freezes at the first iteration. When I run it on a single process (mpirun -n 1 python3) or in serial (python3), it works fine though.

from fenics import *
from fenics_adjoint import *
from collections import OrderedDict
import numpy as np

set_log_active(False)
try:
    from pyadjoint import ipopt  # noqa: F401
except ImportError:
    print("""This example depends on IPOPT and Python ipopt bindings. \
  When compiling IPOPT, make sure to link against HSL, as it \
  is a necessity for practical problems.""")
    raise


data = Constant(2)
nu = Constant(1e-5)

dt = Constant(0.1)
T = 2

PDEs_numb = 1
numDataPerRank = int(PDEs_numb)

alphas = np.ones(PDEs_numb)
my_alpha = alphas

nx = ny = 20
mesh = UnitSquareMesh(nx, ny)
V = FunctionSpace(mesh, "CG", 1)

ctrls = OrderedDict()
t = float(dt)
while t <= T: 
    ctrls[t] = Function(V)
    t += float(dt)

def poisson(alpha, u_0, f):
    # One implicit Euler step of the heat equation with source f.
    u, v = TrialFunction(V), TestFunction(V)
    bc = DirichletBC(V, Constant(0), 'on_boundary')
    a = (u*v + dt*inner(Constant(alpha)*nu*grad(u), grad(v)))*dx
    L = (u_0 + dt*f)*v*dx
    uh = Function(V)
    solve(a == L, uh, bc) 
    return uh

def solve_heat(ctrls):
    u_0 = []
    f = Function(V, name="source")
    d = Function(V, name="data")
    d.assign(project(data, V))  
    t = float(dt)
    j = 0

    for i in range(numDataPerRank):
        u_0.append(Function(V, name="solution"))
        j += 0.5 * float(dt) * assemble((u_0[i] - d) ** 2 * dx)   
    
    while t <= T:
        f.assign(ctrls[t])
        
        for i in range(numDataPerRank):
            u_0[i] = poisson(my_alpha[i], u_0[i], f)
            j += 0.5 * float(dt) * assemble((u_0[i] - d) ** 2 * dx)

        t += float(dt)

    return u_0, d, j

u, d, j = solve_heat(ctrls)

alpha1 = Constant(1e-5)
regularisation = alpha1/2*sum([1/dt*(fb-fa)**2*dx for fb, fa in
    zip(list(ctrls.values())[1:], list(ctrls.values())[:-1])])

J = j + assemble(regularisation)
m = [Control(c) for c in ctrls.values()]

rf = ReducedFunctional(J, m)


problem = MinimizationProblem(rf)
parameters = {"acceptable_tol": 1.0e-3, "maximum_iterations": 3}
solver = IPOPTSolver(problem, parameters=parameters)
opt_ctrls = solver.solve()

Am I missing something? Or does IPOPT simply not support MPI?

Best regards

Sorry for being pushy, but could you give any feedback?
I’d really appreciate it.

I believe you are using Ubuntu Linux. I had the same problem and couldn't solve it. There is a post here on Discourse (Code not running in parallel) where they discuss incompatibilities between MPI implementations (OpenMPI, etc.) on Ubuntu. On macOS it runs very well.

Thank you for the response!

You are right, I do use Ubuntu Linux, plus the Docker image of dolfin-adjoint.
But everything runs well with MPI except the IPOPT solver.
Did you also have the problem specifically with IPOPT?

Yes. I use dolfin-adjoint with IPOPT.

It would be helpful if you showed us the kind of error you are getting. Also, bear in mind that IPOPT is a serial program; it is not designed to run in parallel.

Actually, I'm not getting any error. It freezes at the first iteration. The last thing I see in the terminal is:

**************************************************
*** Finding Acceptable Trial Point for Iteration 1:
**************************************************

--> Starting line search in iteration 1 <--
Storing current iterate as backup acceptable point.
The current filter has 0 entries.
minimal step size ALPHA_MIN = 0.000000E+00
Starting checks for alpha (primal) = 1.00e+00

and then it doesn’t go further.
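
If it helps narrow this down, here is a minimal per-rank tracing sketch (assuming mpi4py, which the dolfin-adjoint Docker image ships with; the print markers are just debugging aids I added) wrapped around the solver call:

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Emit a marker from every rank right before and after the solver call,
# flushing immediately so the output is visible even if the run hangs.
print("rank %d/%d: entering solver.solve()" % (rank, size), flush=True)
opt_ctrls = solver.solve()
print("rank %d/%d: solver.solve() returned" % (rank, size), flush=True)

If all ranks print the "entering" line but none print the "returned" line, the hang is inside IPOPT itself rather than in the reduced-functional evaluation.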

I looked it up, and IPOPT's parallelism depends on the internal linear solver. For instance MUMPS, which stands for MUltifrontal Massively Parallel Solver, supports MPI. And if I'm not mistaken, MUMPS is the default solver according to this (line 1141). In addition, as Rafael Ferro says, it works fine on a Mac, so the reason might really be incompatibilities on Linux.
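
If the linear solver really is the culprit, it should be switchable through the options dict; "linear_solver" is a standard IPOPT option. A sketch (assuming the entries are passed straight through to IPOPT, and that the HSL solvers ma27/ma57/ma86/ma97 were linked in when IPOPT was built):

parameters = {
    "linear_solver": "ma97",       # IPOPT option; "mumps" is the usual default
    "acceptable_tol": 1.0e-3,
    "maximum_iterations": 3,
}
solver = IPOPTSolver(problem, parameters=parameters)
opt_ctrls = solver.solve()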

I see, thank you.

Unfortunately, I don't have a Mac on a machine with enough cores for my problem, but I do have Windows.

I’ll try this tomorrow and leave my feedback here.

Since I have Windows Server 2008 and I'm already having trouble installing Docker Toolbox, I'll take my chances working further with L-BFGS-B from scipy's minimize. But if it continues to show insufficient performance on my problem, I'll have to go back to IPOPT and the Docker Toolbox installation.
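
For reference, requesting L-BFGS-B explicitly through dolfin-adjoint's scipy wrapper looks like this (the method string is forwarded to scipy.optimize.minimize; the options are the ones from my first MWE):

opt_ctrls = minimize(rf, method="L-BFGS-B", options={"maxiter": 3, "disp": True})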

If anyone else has experience working with IPOPT under MPI on Linux/Windows, I would love to see your comments here.

IPOPT with the MA97 linear solver resolves the issue, although manual installation is quite a challenge.

In my case, only the latest dev version of FEniCS works correctly, so I'll leave the configuration below.

fenics-dijitso 2019.2.0.dev0
fenics-dolfin  2019.2.0.dev0
fenics-ffc     2019.2.0.dev0
fenics-fiat    2019.2.0.dev0
fenics-ufl     2021.1.0
dolfin-adjoint 2019.1.2
ipopt          3.12.4
cyipopt        1.0.3

Hi Ruslan,

Could you make IPOPT work in any Docker container? I'd appreciate it if you could help me with it.
Thanks,
Milad

Hi Milad,

No, I couldn't. I ended up installing everything separately on Ubuntu (not using Docker) and working from there.