Parallelising multiple tasks in a class-based program

I have a few different equations that I need to solve. Each equation takes roughly one minute to solve, so N equations take N minutes when solved sequentially. That is too slow for me, so I want to solve them in parallel. I use Python's multiprocessing module, but it throws an error:

TypeError: can't pickle dolfin.cpp.generation.UnitSquareMesh objects

I write my code in a class-based style (I think that is the source of the problem). Using the Poisson equation as an example, it looks something like this:

from dolfin import *
import multiprocessing


class PoissonSolver:
    class Boundary(SubDomain):
        def inside(self, x, on_boundary):
            return on_boundary

    def run_solver(self):
        self.__define_mesh()
        self.__define_bc()
        self.__define_functions()
        self.__run_processes()

    def __define_mesh(self):
        self.mesh = UnitSquareMesh(30, 30)
        self.V = FunctionSpace(self.mesh, "Lagrange", 1)

    def __define_bc(self):
        self.u_D = Expression('1 + x[0] * x[0] + 2 * x[1] * x[1]', degree=2)
        self.bc = DirichletBC(self.V, self.u_D, self.Boundary())

    def __define_functions(self):
        self.u = TrialFunction(self.V)
        self.v = TestFunction(self.V)
        self.a = dot(grad(self.u), grad(self.v)) * dx
        self.f = [Constant(-i) for i in range(10)]
        self.L = [fun * self.v * dx for fun in self.f]

    def __run_processes(self):
        pool = multiprocessing.Pool(processes=4)
        pool.map(self.solve_eq, range(10))

    def solve_eq(self, i):
        u = Function(self.V)
        solve(self.a == self.L[i], u, self.bc)
        File(self.mesh.mpi_comm(), 'foo%g.xml' % i) << u


if __name__ == "__main__":
    solver = PoissonSolver()
    solver.run_solver()

I know that the multiprocessing module uses pickle to share data between processes, but how can I work around that and solve my problem? Thank you for your answers.
Ubuntu, dolfin-version: 2018.1.0.

Hi, consider

from dolfin import *
import multiprocessing


def poisson_solver(i):
    mesh = UnitSquareMesh(MPI.comm_self, 30, 30)
    V = FunctionSpace(mesh, "Lagrange", 1)

    u_D = Expression('1 + x[0] * x[0] + 2 * x[1] * x[1]', degree=2)
    bc = DirichletBC(V, u_D, 'on_boundary')

    u, v = TrialFunction(V), TestFunction(V)
    a = dot(grad(u), grad(v)) * dx
    f = Constant(-i)
    L = f * v * dx

    u = Function(V)
    solve(a == L, u, bc)
    File(mesh.mpi_comm(), 'foo%g.xml' % i) << u

# ----------------------------------------------------------------------

if __name__ == "__main__":
    poisson_solver(0)  # JIT compile everything first

    pool = multiprocessing.Pool(processes=4)
    pool.map(poisson_solver, range(10))
    pool.close()

Hi, thank you for the answer! As I understand it, in this example we don't share dolfin objects between processes, which is why it works without exceptions. Is there any way to create the same objects (mesh, function space, etc.) in advance and run only the differing parts (constant creation and solve execution) in separate processes? Or is that impossible because dolfin objects can't be pickled and therefore can't be passed to the processes?


I’m also trying to do exactly this.

I also have the same question.

Similar question here. Has anyone made any progress with it?

Hi @Andre_Gesing, @robert, @urbainvaes.
The only way I've come up with is to save the objects to files on disk and read them back in each process.
That way I avoided pickle serialisation and deserialisation.

Patrick Farrell's defcon project could perhaps provide a guide: defcon splits the MPI communicator so that multiple DefconWorkers handle "parallel serialised" computation for the purposes of its deflated continuation algorithm.