Recently I observed that my code runs out of memory when I call apply('insert') on a function's vector after updating its values. Is there any way to free this allocated memory? Below is a brief example where memory usage increased from 4014740 to 9005872 KiB (the Mem field of the top command) after 180000 iterations, running on 16 processes under mpirun.
from dolfin import *
import numpy as np
import sys
mesh = UnitCubeMesh(64, 64, 64)
print('coor shape = ', mesh.coordinates().shape)
V = VectorFunctionSpace(mesh, 'CG', 1)
u = Function(V)
u_array = u.vector().get_local()
mpiRank = MPI.rank(MPI.comm_world)
i = 0
while True:
    # Fill the local array with random values and push them into the vector
    u_array = np.random.randn(u_array.shape[0])
    u.vector().set_local(u_array)
    u.vector().apply('insert')
    if i % 1000 == 0 and mpiRank == 0:
        print(i, 'th iteration')
        sys.stdout.flush()
    i += 1
This happens both in conda with PETSc 3.15.5 and with dolfin 2019.2.0.dev0 on Ubuntu 18.04, where I do not know the PETSc version exactly; I found version 3.7.7 under /usr/lib.
Is there a command with which I can free this memory, or can I combine dolfin with a newer version of PETSc? Many thanks.
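For what it is worth, the only generic Python-level command I know of is forcing the garbage collector. A minimal sketch of the loop above with a periodic gc.collect() added (the placement is my own guess; it can only free unreachable Python objects, not memory retained inside PETSc itself):

import gc

i = 0
while True:
    u_array = np.random.randn(u_array.shape[0])
    u.vector().set_local(u_array)
    u.vector().apply('insert')
    if i % 1000 == 0:
        # Force a garbage-collection pass; this only releases unreachable
        # Python objects and does not touch memory held by PETSc.
        gc.collect()
    i += 1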
Yes, indeed: using the same Docker image the memory leak is fixed.
I already followed the recommendation to store u.vector() once, as vec = u.vector(), outside the loop in my script; for simplicity I placed those calls inside the while loop in the example above. The memory leak is a consequence of vec.apply(...): when I remove this call the memory is fine, but other issues arise in my code. Thank you for the response. I imagined there was an issue with the garbage collector, so the simplest solution is to upgrade to the newer release.
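For completeness, a minimal sketch of that recommendation, with the vector wrapper cached outside the loop instead of calling u.vector() on every iteration (the name vec is my own choice):

vec = u.vector()                # create the vector wrapper once
u_array = vec.get_local()
i = 0
while True:
    u_array = np.random.randn(u_array.shape[0])
    vec.set_local(u_array)      # reuse the cached wrapper
    vec.apply('insert')         # still leaks on the affected versions
    i += 1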