Assembling in parallel with EigenMatrix data type

Hello everyone,

I’m trying to assemble a matrix from a variational formulation into an EigenMatrix instance so that it can be used with SLEPcEigenSolver. The following MWE runs fine in serial, but when it is run under mpirun it fails with MemoryError: std::bad_alloc.

MWE:

from fenics import *

mesh = BoxMesh(Point(0,0,0), Point(1,1,1), 1, 1, 1)
V = FunctionSpace(mesh, "CG", 1)

u = TrialFunction(V)
v = TestFunction(V)
a = inner(nabla_grad(u), nabla_grad(v))*dx

# Assemble the bilinear form directly into an EigenMatrix
A = EigenMatrix()
assemble(a, tensor=A)

The error when running mpirun -n 3 python3 parallel_mwe.py:

Each of the three MPI ranks prints the same traceback:

Traceback (most recent call last):
  File "parallel_mwe.py", line 11, in <module>
    assemble(a , tensor = A)
  File "/usr/local/lib/python3.6/dist-packages/dolfin/fem/assembling.py", line 213, in assemble
    assembler.assemble(tensor, dolfin_form)
MemoryError: std::bad_alloc

Working environment:
FEniCS Docker image: 2019.1.0

Apologies for the confusion: the above code does work in parallel if a PETScMatrix is used instead of an EigenMatrix:

A = PETScMatrix()
assemble(a, tensor=A)
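
Since the assembled matrix is ultimately meant for SLEPcEigenSolver, here is a minimal sketch of how the parallel PETScMatrix path could continue. The mass form m, the spectrum setting, and the number of requested eigenpairs are illustrative assumptions and not part of the original code:

# Illustrative: assemble a mass matrix to pose a generalized problem A x = lambda M x
m = inner(u, v)*dx
M = PETScMatrix()
assemble(m, tensor=M)

# SLEPcEigenSolver accepts PETScMatrix objects and runs in parallel
solver = SLEPcEigenSolver(A, M)
solver.parameters["spectrum"] = "smallest magnitude"  # assumed setting
solver.solve(5)                                       # request 5 eigenpairs (assumed count)

r, c, rx, cx = solver.get_eigenpair(0)  # real/complex parts of eigenvalue and eigenvector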