In the tutorial it is stated:

> We observe that the left hand side of the system (the matrix) does not change from one time step to another, thus we only need to assemble it once.

However, as far as I understand, the matrix is still inverted in every time step. If we care about efficiency, shouldn't one only have to do this once, before the time loop?
PETSc caches the factorization: as long as the matrix is not modified, the factorization is reused. You can check this by timing the solve call.
```python
for i in range(num_steps):
    t += dt

    # Update the right-hand side, reusing the initial vector
    with b.localForm() as loc_b:
        loc_b.set(0)
    assemble_vector(b, linear_form)

    # Apply Dirichlet boundary condition to the vector
    apply_lifting(b, [bilinear_form], [[bc]])
    b.ghostUpdate(addv=PETSc.InsertMode.ADD_VALUES,
                  mode=PETSc.ScatterMode.REVERSE)
    set_bc(b, [bc])

    # Solve linear problem and time the solve
    start = time.perf_counter()
    solver.solve(b, uh.x.petsc_vec)
    end = time.perf_counter()
    print(i, f"{end - start:.2e}s")
    uh.x.scatter_forward()

    # Update solution at previous time step (u_n)
    u_n.x.array[:] = uh.x.array

    # Write solution to file
    xdmf.write_function(uh, t)
```
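The factor-once, solve-many pattern is easy to see outside PETSc as well. Here is a minimal SciPy analogy (not the DOLFINx API; the matrix and loop are illustrative): the expensive LU factorization happens once before the loop, and each "time step" only performs cheap triangular solves.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
# 1D Laplacian-like matrix, a stand-in for the constant time-stepping matrix A
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

lu = spla.splu(A)  # expensive: factor once, before the "time loop"
for step in range(5):
    b = np.random.rand(n)  # new right-hand side each step
    x = lu.solve(b)        # cheap: forward/backward substitution only
```

If you time `spla.splu(A)` against `lu.solve(b)`, the factorization dominates, which is exactly why a cached factorization makes the per-step solves fast.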
I understand that this caching behavior is linked to the choice of a direct (factorization-based) solver. Do you know of any petsc4py-compatible way to combine the benefits of iterative solvers with this idea of computing a factorization only once?
I think you can set it explicitly through `KSPSetReusePreconditioner` (see the PETSc 3.22.0 documentation). However, in my experience this makes a terrible preconditioner if you modify the underlying matrix. You might be better off using an incomplete LU factorisation, but this all depends on the problem you're trying to solve.