Hello all,
I have a script in which I solve the coupled Navier-Stokes equations with pressure imposed as Neumann boundary conditions. It works fine for smaller meshes but uses a lot of memory on larger problems. On the same computer that can handle meshes with over a million elements using the Oasis solver, my script runs out of memory on a 300k-element mesh.
This is the script:
from dolfin import *
# SNES line search options: monitor the line search and use backtracking ("bt")
PETScOptions.set("snes_linesearch_monitor", "")
PETScOptions.set("snes_linesearch_type", "bt")
mesh_name = "1mm_diameter_NEW"
mesh = Mesh()
with XDMFFile(mesh_name + ".xdmf") as infile:
    infile.read(mesh)
subdomains = MeshFunction("size_t", mesh, mesh.topology().dim())
with XDMFFile(mesh_name + "_subdomains.xdmf") as infile:
    infile.read(subdomains)
boundaries = MeshFunction("size_t", mesh, mesh.topology().dim() - 1)
with XDMFFile(mesh_name + "_boundaries.xdmf") as infile:
    infile.read(boundaries)
inlet_vein = 1
outlet_vein = 2
walls = 3
################################
eta = 1
rho = 1
nu = eta/rho
dt = 0.01
T = .1
V_element = VectorElement("Lagrange", mesh.ufl_cell(), 2)
Q_element = FiniteElement("Lagrange", mesh.ufl_cell(), 1)
W_element = MixedElement(V_element, Q_element) # Taylor-Hood
W = FunctionSpace(mesh, W_element)
vq = TestFunction(W)                  # Test function in the mixed space
delta_up = TrialFunction(W)           # Trial function in the mixed space (the Newton increment)
(delta_u, delta_p) = split(delta_up)  # Increment components in each subspace, used for the Jacobian
(v, q) = split(vq)                    # Test functions in each subspace, used to write the functional
up = Function(W)
(u, p) = split(up)
up_prev = Function(W)
(u_prev, _) = split(up_prev)
n = FacetNormal(mesh)
P_out = Constant(0.)
P_in_v = Constant(0.002)
ds = Measure("ds")(subdomain_data=boundaries)
F = (  inner(u, v)/Constant(dt)*dx          # Implicit Euler time discretization
     - inner(u_prev, v)/Constant(dt)*dx     # Implicit Euler time discretization
     + nu*inner(grad(u), grad(v))*dx
     + inner(grad(u)*u, v)*dx
     - div(v)*p*dx
     + div(u)*q*dx
     + P_in_v*inner(v, n)*ds(inlet_vein)
     + P_out*inner(v, n)*ds(outlet_vein)
     )
J = derivative(F, up, delta_up)
walls_bc = DirichletBC(W.sub(0), Constant((0., 0., 0.)), boundaries, walls)
bc = [walls_bc] # We need to pass SNES the boundary conditions for u, not for delta_u
snes_solver_parameters = {"nonlinear_solver": "snes",
"snes_solver": {"linear_solver": "mumps",
"maximum_iterations": 20,
"report": True,
"error_on_nonconvergence": True}}
problem = NonlinearVariationalProblem(F, up, bc, J)
solver = NonlinearVariationalSolver(problem)
solver.parameters.update(snes_solver_parameters)
xdmffile_u = XDMFFile('Coupled/TEST/velocity.xdmf')
xdmffile_p = XDMFFile('Coupled/TEST/pressure.xdmf')
(u, p) = up.split()
K = int(T/dt)
for i in range(1, K):
    # Compute the current time
    t = i*dt
    print("t =", t)
    # Solve the nonlinear problem
    solver.solve()
    # Store the solution in up_prev for the next time step
    assign(up_prev, up)
    # Extract the velocity and pressure components
    (u, p) = up.split()
    # Save solution to file (XDMF/HDF5)
    xdmffile_u.write(u, t)
    xdmffile_p.write(p, t)
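For reference, if I have transcribed my own form correctly, the residual F above corresponds to the implicit Euler / Taylor-Hood discretization

$$
\frac{1}{\Delta t}\int_\Omega (u - u_{\mathrm{prev}})\cdot v \,\mathrm{d}x
+ \nu\int_\Omega \nabla u : \nabla v \,\mathrm{d}x
+ \int_\Omega (\nabla u \, u)\cdot v \,\mathrm{d}x
- \int_\Omega p\,\nabla\cdot v \,\mathrm{d}x
+ \int_\Omega q\,\nabla\cdot u \,\mathrm{d}x
+ P_{\mathrm{in}}\int_{\Gamma_{\mathrm{in}}} v\cdot n \,\mathrm{d}s
+ P_{\mathrm{out}}\int_{\Gamma_{\mathrm{out}}} v\cdot n \,\mathrm{d}s
= 0,
$$

with a no-slip Dirichlet condition on the walls and $P_{\mathrm{out}} = 0$.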
Since I based the script on a template, I must admit that I do not quite know the effect of
PETScOptions.set("snes_linesearch_monitor", "")
PETScOptions.set("snes_linesearch_type", "bt")
and
snes_solver_parameters = {"nonlinear_solver": "snes",
                          "snes_solver": {"linear_solver": "mumps",
                                          "maximum_iterations": 20,
                                          "report": True,
                                          "error_on_nonconvergence": True}}
These parameters came with the template and I kept them as they were, but I do not know whether they might be responsible for the huge memory usage.
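For reference, this is my (possibly wrong) understanding of what these settings do, written as comments; please correct me if I am mistaken:

# Print the progress of the SNES line search at each Newton iteration
PETScOptions.set("snes_linesearch_monitor", "")
# Use a backtracking ("bt") line search to globalize Newton's method
PETScOptions.set("snes_linesearch_type", "bt")

snes_solver_parameters = {
    # Use PETSc SNES (Newton with line search) as the nonlinear solver
    "nonlinear_solver": "snes",
    "snes_solver": {
        # Factorize the coupled velocity-pressure Jacobian with the MUMPS direct LU solver
        "linear_solver": "mumps",
        # Stop after at most 20 Newton iterations
        "maximum_iterations": 20,
        # Print convergence information and raise an error if Newton does not converge
        "report": True,
        "error_on_nonconvergence": True}}

In particular, I wonder whether a direct LU factorization (MUMPS) of the full coupled velocity-pressure system is simply very memory-hungry for a 3D mesh of this size.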
I was able to solve 2D problems and a 30k-element 3D problem, but even for that 3D tube the memory usage was already very high (almost 20 GB of RAM), so with 300k elements or more the computer cannot run my simulation.
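In case it helps with the diagnosis, this is how I would check the problem size (a small addition, not part of the run above):

# Report the problem size: number of cells and number of unknowns in the mixed Taylor-Hood space
print("Number of cells:", mesh.num_cells())
print("Number of DOFs :", W.dim())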
Can anyone maybe help me with this problem?
Thanks in advance,
Jack