Memory leak in dolfin from GhostValues update?

Recently I observed that my code ran out of memory when using the command apply('insert') on a function after updating its values. Is there any way to free this allocated memory? Here is a brief code example where memory usage increased from 4014740 to 9005872 (KiB Mem from the top command) after 180000 iterations, running with 16 processes via mpirun.

from dolfin import *
import numpy as np
import sys


mesh = UnitCubeMesh(64, 64, 64)
print('coor shape = ', mesh.coordinates().shape)
V = VectorFunctionSpace(mesh, 'CG', 1)
u = Function(V)

u_array = u.vector().get_local()
mpiRank = MPI.rank(MPI.comm_world)
i = 0
while True:
    u_array = np.random.randn(u_array.shape[0])
    u.vector().set_local(u_array)
    u.vector().apply('insert')
    if i % 1000 == 0 and mpiRank == 0:
        print(i, 'th iteration')
        sys.stdout.flush()
    i += 1

What version of PETSc are you using?

In later releases (3.18 and onwards), they changed how garbage collection of PETSc objects works from Python.

This happens in conda with PETSc version 3.15.5, and with the 2019.2.0.dev0 dolfin version on Ubuntu 18.04, where I do not know the PETSc version exactly; I found version 3.7.7 under /usr/lib.
Is there any command with which I can free this memory, or can I combine dolfin with a newer version of PETSc? Many thanks

I couldn’t reproduce this issue with 8 processes using

docker run -ti --network=host -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v $(pwd):/root/shared -w /root/shared --rm  ghcr.io/scientificcomputing/fenics-gmsh:2023-03-01a

Note that you are using a quite outdated OS (Ubuntu 18.04, which has reached end of life). The Python version can also factor into this.

One thing I would recommend is to store u.vector() outside the loop, i.e. call vec = u.vector() once and then use vec.set_local(...) and vec.apply(...) inside the loop.
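The pattern looks roughly like this. Since dolfin may not be installed here, a small stand-in class mimics the vector returned by u.vector() (in real code vec would be dolfin's PETSc-backed vector, and local_size(), set_local() and apply() are its actual methods); only the call sequence matters:

```python
import numpy as np

# Stand-in for the object returned by u.vector(), so the sketch is runnable
# without dolfin; apply() is where dolfin would update ghost values.
class StubVector:
    def __init__(self, n):
        self._x = np.zeros(n)
        self.apply_calls = 0
    def local_size(self):
        return self._x.size
    def set_local(self, values):
        self._x[:] = values
    def apply(self, mode):
        self.apply_calls += 1  # ghost update would happen here in dolfin

vec = StubVector(10)  # in dolfin: vec = u.vector(), fetched ONCE before the loop
for step in range(3):
    vec.set_local(np.random.randn(vec.local_size()))
    vec.apply('insert')
```

Fetching the vector once avoids repeatedly constructing wrapper objects around the same underlying PETSc vector on every iteration.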

What happens if you remove the apply command on your system, does the memory not increase?

Yes indeed, using the same docker image the memory leak is fixed.

I already follow the recommendation to store u.vector() as vec = u.vector() outside the loop in my actual script; for simplicity I put the calls inside the while loop in the example above. The memory leak is a consequence of vec.apply(...): when I remove this call the memory is fine, but other issues arise in my code. Thank you for the response. I suspected an issue with the garbage collector, so the simplest solution is to upgrade to the newer release.