Is there a memory leak in FEniCS, and how can I fix it?

I am a relatively new FEniCS user. I am using it to solve a time-dependent problem consisting of a system of 4 differential equations: the Poisson equation and 3 drift-diffusion equations.
The plan is to extend it to at least 15 differential equations.

Currently, the program works well and I have managed to obtain good results (comparable to commercial software). However, when I ran the program for a longer time, I noticed that the amount of used memory increases linearly with time. I tried to find the reason for this, so I reduced the code as much as possible, but the problem remained. Next, I forced garbage collection, which reduced memory usage a bit; however, it still kept increasing with each iteration.
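As an aside, the fact that forced garbage collection does not stop the growth is itself informative: `gc.collect()` can only reclaim objects that are no longer referenced, so memory held by live references (e.g. inside a library) keeps accumulating. A minimal stdlib-only sketch of this, with a deliberately leaky list standing in for whatever the solver retains, and `tracemalloc` standing in for psutil:

```python
import gc
import tracemalloc

leak = []  # stands in for state that is never released between iterations

def iterate():
    # placeholder for one "solver call" that allocates and keeps a reference
    leak.append(bytearray(1024))

tracemalloc.start()
baseline, _ = tracemalloc.get_traced_memory()

for i in range(1000):
    iterate()
    gc.collect()  # forced garbage collection, as in the post

current, peak = tracemalloc.get_traced_memory()
growth = current - baseline  # positive: gc cannot free referenced objects
```

The analogy is loose (FEniCS allocates in C++ behind PETSc handles, not in Python lists), but the mechanism is the same: collection only helps if the references are actually dropped.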

In order to find the cause of this behavior, I tried to reduce the problem to the simplest case: the textbook example, i.e. the Poisson equation with a Dirichlet boundary condition, run over and over for a large number of iterations (on the order of a million) to check whether memory usage increases in that case as well. I used psutil to track memory usage. The code is given below (a modified and simplified version of the example from the book).

from dolfin import *
import psutil
import os

pid = os.getpid()
process = psutil.Process(pid)
memory_info = open('memory info.txt', 'w')

# Create mesh and define function space
mesh = UnitSquareMesh(6, 4)
V = FunctionSpace(mesh, "Lagrange", 1)

# Define boundary conditions
u0 = Expression("1 + x[0]*x[0] + 2*x[1]*x[1]", degree=2)

def u0_boundary(x, on_boundary):
    return on_boundary

bc = DirichletBC(V, u0, u0_boundary)

# Define variational problem
u = TrialFunction(V)
v = TestFunction(V)
f = Constant(-6.0)
a = inner(nabla_grad(u), nabla_grad(v))*dx
L = f*v*dx

# Compute the solution repeatedly and log resident memory (in GiB)
u = Function(V)

for i in range(1000000):
    solve(a == L, u, bc)
    rss_gib = process.memory_info().rss / 2.0**30
    memory_info.write(str(i) + '\t' + str(rss_gib) + '\n')

memory_info.close()

I tested the code on 3 different machines and with 2 different versions of FEniCS:

  1. a computing server with a Xeon processor and FEniCS 2018.1.0 on Xubuntu 18.04.2, installed from the repository
  2. a laptop with a Core i7 8550U and FEniCS 2019.1.0 on Xubuntu 18.04.2, installed from the repository
  3. a desktop computer with a Core i5 3550 and FEniCS 2019.1.0 on elementaryOS 5.0, installed from Anaconda.

In all three cases there was a memory leak, and memory usage increased linearly with time (see figure below). The increase per iteration is actually small (a couple of MB in this case), but for a large number of iterations and equations it accumulates rather quickly and becomes noticeable.

I repeated the calculations with different mesh sizes, and some preliminary results show that the mesh size does not affect the rate of increase, just the starting memory. Also, by running the code with mpirun, I noticed that memory usage multiplies with the number of cores used.

Interestingly, the changelog for FEniCS 2019.1.0 mentions that a memory leak was fixed.

I also found a bug report on a similar problem from last year that was caused by OpenMPI.
However, in my case the memory leak happens even when I run the code on a single core.

What can be the reason for this behavior, and how can I fix it? I would appreciate any information on how to overcome this problem.


What happens if you reuse the data structures for your linear system rather than constructing a new matrix and vector each iteration with solve()?

I do not observe the memory usage growth that you do.
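To illustrate the suggestion, here is a sketch of that reuse for the Poisson example above, assuming the legacy DOLFIN API (`assemble_system`, `LUSolver`). The system matrix and right-hand side are assembled once, and the same factorization is reused in every iteration, so no new tensors are created inside the loop:

```python
from dolfin import *

mesh = UnitSquareMesh(6, 4)
V = FunctionSpace(mesh, "Lagrange", 1)
u0 = Expression("1 + x[0]*x[0] + 2*x[1]*x[1]", degree=2)
bc = DirichletBC(V, u0, lambda x, on_boundary: on_boundary)

u = TrialFunction(V)
v = TestFunction(V)
f = Constant(-6.0)
a = inner(nabla_grad(u), nabla_grad(v))*dx
L = f*v*dx

# Assemble the linear system once, outside the loop,
# with the boundary conditions applied symmetrically
A, b = assemble_system(a, L, bc)
solver = LUSolver(A)
u = Function(V)

for i in range(1000000):
    # Reuse A, b, and the factorization; solve(a == L, u, bc)
    # would instead assemble fresh tensors on every call
    solver.solve(u.vector(), b)
```

The point is that `solve(a == L, u, bc)` is convenience syntax that builds a new problem (matrix, vector, solver) on each call, whereas holding the assembled objects in Python keeps the allocation to a single setup.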

Thank you for the quick answer. You were right: when I assemble the matrix and vector outside the loop, there is no memory leak.

I guess the problem is that a new matrix and vector are created in each iteration and the memory is not released after use. Is this correct?

This solves the problem for the Poisson equation with a constant source term. However, what should I do when the source term changes with each iteration, i.e. when I have a time-dependent problem?

Sorry for the double post, but in the meantime I found out that if you use

b = assemble(L, tensor = b)

the memory will be allocated only once, so I tried it and it works. Thank you for pointing out the likely cause of the problem. Is this possible with NonlinearVariationalSolver as well?

Yes, but I don’t like NonlinearVariationalSolver. It’s syntactic sugar that grossly detracts from the details of the underlying problem. See here for an example of a custom Newton solver instead, or the Cahn–Hilliard demo.
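For reference, the custom-Newton pattern boils down to something like the following sketch (legacy DOLFIN syntax; a nonlinear Poisson problem is assumed purely for illustration). The Jacobian matrix and residual vector are preallocated once and refilled in place with `assemble(..., tensor=...)`, mirroring the trick above:

```python
from dolfin import *

mesh = UnitSquareMesh(8, 8)
V = FunctionSpace(mesh, "Lagrange", 1)
bc = DirichletBC(V, Constant(0.0), lambda x, on_boundary: on_boundary)

u = Function(V)  # initial guess u = 0 satisfies the BC
v = TestFunction(V)
f = Constant(1.0)

# Nonlinear Poisson residual: -div((1 + u^2) grad u) = f in weak form
F = (1 + u**2)*inner(grad(u), grad(v))*dx - f*v*dx
J = derivative(F, u)

# Preallocate tensors once and reuse them every Newton step
A = PETScMatrix()
b = PETScVector()
du = Function(V)

for k in range(25):
    assemble(J, tensor=A)   # refill the existing matrix
    assemble(F, tensor=b)   # refill the existing residual vector
    bc.apply(A, b)          # enforce the (homogeneous) BC on the update
    if b.norm('l2') < 1e-10:
        break
    solve(A, du.vector(), b)
    u.vector().axpy(-1.0, du.vector())  # Newton update: u <- u - J^{-1} F
```

The same tensor-reuse idea then carries over to whatever residual and Jacobian your actual problem defines.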


Thank you, that solved the problem. There is still a small memory increase with every output to the VTK files, but it is negligible.