Invert the matrix only once for time-dependent problems

Dear community,

in the tutorial Diffusion of a Gaussian function, the sentence

We observe that the left hand side of the system, the matrix does not change from one time step to another, thus we only need to assemble it once.

is stated. However, as far as I understand, the matrix is still effectively inverted (i.e., the full system is solved) in every time step. If we care about efficiency, one should only have to do this once, before the time loop.

The problem is

M \partial_t \vec u + A \vec u = \vec f

Crank-Nicolson:

[M + \frac{\Delta t}{2}A ] \vec u^{n+1} = [M - \frac{\Delta t}{2} A] \vec u^{n} + \Delta t \vec f^{n+1/2}

Using LU factorization, I could imagine factorizing

M + \frac{\Delta t}{2}A = LU

once before the time loop, so that each step only requires the two triangular solves

L\vec w = [M - \frac{\Delta t}{2} A] \vec u^{n} + \Delta t \vec f^{n+1/2},

and then

U \vec u^{n+1} = \vec w .

That should be faster than the method shown in the tutorial.
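Outside of FEniCSx, the factorize-once idea can be sketched with SciPy's sparse LU. The matrices below are simple stand-ins (an identity mass matrix and a 1D Laplacian stencil), not the tutorial's assembled forms:

```python
# Factorize once, solve many times: a minimal SciPy sketch of the
# Crank-Nicolson time loop, assuming stand-in matrices M and A.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, dt = 50, 0.01
M = sp.identity(n, format="csc")                       # stand-in mass matrix
A = sp.diags([-1, 2, -1], [-1, 0, 1], (n, n)).tocsc()  # stand-in stiffness matrix

lhs = (M + 0.5 * dt * A).tocsc()  # fixed left-hand side M + dt/2 * A
rhs_mat = M - 0.5 * dt * A        # fixed part of the right-hand side

lu = spla.splu(lhs)  # LU factorization, computed ONCE before the loop
u = np.ones(n)
for step in range(10):
    b = rhs_mat @ u   # new right-hand side each step
    u = lu.solve(b)   # only triangular solves, no refactorization
```

Each `lu.solve` call performs exactly the forward/backward substitution described above, which is the cheap part compared to the factorization itself.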


Q1: Is this the preferred way for this problem, or should I use another factorization, or even a completely different linear algebra routine?

Q2: How would I implement this in FEniCSx?

PETSc caches the factorization: as long as the matrix is not modified, it reuses the existing factorization instead of refactorizing.
You can check this by timing the solve call.

for i in range(num_steps):
    t += dt

    # Update the right hand side reusing the initial vector
    with b.localForm() as loc_b:
        loc_b.set(0)
    assemble_vector(b, linear_form)

    # Apply Dirichlet boundary condition to the vector
    apply_lifting(b, [bilinear_form], [[bc]])
    b.ghostUpdate(addv=PETSc.InsertMode.ADD_VALUES,
                  mode=PETSc.ScatterMode.REVERSE)
    set_bc(b, [bc])

    # Solve linear problem
    start = time.perf_counter()
    solver.solve(b, uh.x.petsc_vec)
    end = time.perf_counter()
    print(i, f"{end-start:.2e}s")
    uh.x.scatter_forward()

    # Update solution at previous time step (u_n)
    u_n.x.array[:] = uh.x.array

    # Write solution to file
    xdmf.write_function(uh, t)

Result:

0 1.28e-02s
1 2.64e-04s
2 2.52e-04s
3 3.18e-04s
4 3.08e-04s
5 2.57e-04s
6 2.41e-04s
...

Thanks a lot.

I understand that this caching behavior is tied to the choice of a direct (factorization-based) solver. Do you know of any petsc4py-compatible way to combine the benefits of iterative solvers with this idea of computing a factorization only once?

I think you can set it explicitly through KSPSetReusePreconditioner — PETSc 3.22.0 documentation. However, in my experience this makes a terrible preconditioner if you then modify the underlying matrix. You might be better off using an incomplete LU factorisation as the preconditioner, but this all depends on the problem you’re trying to solve.
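As an illustration of that combination, here is a SciPy-based sketch (with a stand-in matrix, not your actual system): an incomplete LU factorization is computed once before the time loop and then reused as a preconditioner for GMRES at every step.

```python
# Sketch: iterative solver (GMRES) with a preconditioner factorized once.
# The matrix below is a stand-in for the fixed system matrix M + dt/2 * A.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100
lhs = sp.diags([-1.0, 2.1, -1.0], [-1, 0, 1], (n, n)).tocsc()

# Incomplete LU: computed ONCE, before the time loop
ilu = spla.spilu(lhs, drop_tol=1e-5)
prec = spla.LinearOperator((n, n), matvec=ilu.solve)

u = np.zeros(n)
for step in range(5):
    # Stand-in for the right-hand side that changes every step
    b = np.sin((step + 1) * np.linspace(0.0, 1.0, n))
    u, info = spla.gmres(lhs, b, M=prec)  # info == 0 means converged
```

Because the ILU is only an approximation of the true inverse, GMRES still iterates, but each step avoids a full refactorization; how well this works depends strongly on the matrix.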


Thanks a lot for your response!