What is the best way to solve several times with the same lhs and different rhs

Hi, I’m working on a linear elasticity problem using FEniCS DOLFIN.
I have to solve the same model several times with a series of different loading conditions, like
solve(a == L0, u0, bc)
solve(a == L1, u1, bc)

solve(a == Li, ui, bc)
To the best of my knowledge, a single inverse is enough to solve all of them.
For example, (in terms of MATLAB) us = K\[f0, f1, …, fi, …]
(or in Python, us = spsolve(K, np.c_[f0, f1, …, fi, …]))
How can I do the same thing with DOLFIN?
What is the best implementation of solving the same model with a series of loads?

Thanks :slight_smile:

Hi

I would try something like this (assuming that boundary conditions are always the same):

# First, you assemble your matrix
A = assemble(a)
[bcs.apply(A) for bcs in bc]

# Then your rhs vector b1
b1 = assemble(L1)
[bcs.apply(b1) for bcs in bc]

# Define your solution variable
u1 = Function(V)

# Solve the equation
solve(A, u1.vector(), b1)

# Repeat the procedure for case 2
b2 = assemble(L2)
[bcs.apply(b2) for bcs in bc]
u2 = Function(V)
solve(A, u2.vector(), b2)

## and so on...

Of course, you can place variables in lists and then loop over them.
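For instance, a minimal sketch of that loop (the list names Ls and us are placeholders of mine, not from your post):

# Assemble and apply BCs to the matrix once
A = assemble(a)
[bcs.apply(A) for bcs in bc]

# Ls is a placeholder list holding the linear forms L0, L1, ...
us = []
for Li in Ls:
    bi = assemble(Li)
    [bcs.apply(bi) for bcs in bc]
    ui = Function(V)
    solve(A, ui.vector(), bi)
    us.append(ui)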


Thanks!
I just tried this, but the elapsed time was almost the same.
So I decided to do the following:

from dolfin import *
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import spsolve
# Assemble matrix and vector
A = PETScMatrix()
assemble(a, tensor=A)
b = PETScVector()
assemble(L, tensor=b)
# Extract the underlying PETSc objects
A_mat = as_backend_type(A).mat()
b_vec = as_backend_type(b).vec()
# Build a scipy sparse matrix and a numpy array from them
# (getValuesCSR returns (indptr, indices, data); reversed, that is
# exactly the (data, indices, indptr) triple csr_matrix expects)
A_sparse = csr_matrix(A_mat.getValuesCSR()[::-1], shape=A_mat.size)
b_array = b_vec.array
# Solve using spsolve
u_array = spsolve(A_sparse, b_array)

Using the approach above, I can also do

B = np.c_[b0, b1, ..., bn]
U = spsolve(A_sparse, B)
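In case it's useful, here is a fuller sketch of that multi-RHS idea with an explicit factorization via scipy's splu (the form list Ls is a placeholder name of mine, and this assumes boundary conditions were already applied to A before the conversion):

import numpy as np
from scipy.sparse.linalg import splu

# Factorize once (SuperLU wants CSC format)
lu = splu(A_sparse.tocsc())

# Stack the assembled load vectors into one (N, n) array
# (Ls is a placeholder list of linear forms)
b_arrays = []
for Li in Ls:
    bi = assemble(Li)
    [bcs.apply(bi) for bcs in bc]
    b_arrays.append(bi.get_local())
B = np.column_stack(b_arrays)

# One back-substitution per column, all reusing the same factorization
U = lu.solve(B)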

I’ll share the results after testing this. :slight_smile:

Hi @minsikseo, just reading this question from the side: my guess is that your approach using scipy won’t scale as well to a large number of processors (or a large number of DOFs). scipy doesn’t have MPI parallelism, though it might have thread parallelism via BLAS.

It would be interesting if you could assess parallel performance (scaling tests) as part of your experiments.

Hi there @dparsons @bmf, I just tested solving Poisson’s equation and the results are as below:

There was some overhead for applying boundary conditions, and I used the Lagrange multiplier method (which definitely increases the size of the matrix).

The code is probably a mess because I was in a hurry. Anyhow, as @dparsons said, it turns out that FEniCS outperforms scipy. But what about multiple load vectors? Consider the case

$$\mathbf{K}\mathbf{U} = \mathbf{F}$$

where $\mathbf{F}$ and $\mathbf{U}$ collect $m$ vectors with $n$ DOFs each, i.e., they are $\left(n, m\right)$ matrices, and $\mathbf{K}$ is an $\left(n, n\right)$ matrix.

Do you guys have any idea how to solve multiple systems with the same model and boundary conditions at once?

Any comments, thanks!

Please understand that FEniCS does not provide implementations of linear algebra solvers. It interfaces with external linear algebra libraries, e.g. PETSc (which in turn provides interfaces to other linear algebra libraries, e.g. MUMPS, UMFPACK).

If you want to store the LU (or similar) factorisation of a matrix, you can do so through the PETSc interface by indicating you want to reuse the preconditioner.

That way you can reuse the factorisation in your solution of a system where you only reassemble the load vector. Performance gains will be far more noticeable for discretisations of vector systems, e.g. linear elasticity and Stokes.

Also understand that each time you call the syntactic sugar solve() function you’re creating and destroying a KSP object. This is grossly inefficient if you want to compute multiple solutions of a system where the operator does not change.
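For example, a minimal sketch of that pattern in legacy DOLFIN with a direct solver (Ls and us are placeholder names; depending on the DOLFIN version, the reuse flag may need to be set explicitly):

# Assemble and apply BCs once
A = assemble(a)
[bcs.apply(A) for bcs in bc]

# Create one LU solver object and keep it alive across all solves
solver = LUSolver(A)
# On older DOLFIN versions the reuse flag is explicit:
# solver.parameters["reuse_factorization"] = True

us = []
for Li in Ls:
    b = assemble(Li)
    [bcs.apply(b) for bcs in bc]
    u = Function(V)
    solver.solve(u.vector(), b)  # later solves reuse the factorization
    us.append(u)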

Hi @nate, your comment was very helpful.
I tested

# Build the solver once, attach the operator, and keep it for all loads
solver = PETScKrylovSolver(method, preconditioner)
solver.set_operator(A)  # A is the assembled system matrix
solver.set_reuse_preconditioner(True)
solver.parameters['nonzero_initial_guess'] = True

resulting in solves almost twice as fast as before, i.e., $\alpha = 0.5$.
So, reusing the preconditioner, I get $\left(1 + \left(n-1\right)\alpha\right) O\left(N\right)$ with $\alpha < 1$, which is faster than $n\,O\left(N\right)$ (e.g., for $n = 10$ loads and $\alpha = 0.5$, that is $5.5\,O\left(N\right)$ versus $10\,O\left(N\right)$).

But… I still expect a single factorization to be much faster than multiple factorizations, even if they reuse the preconditioning information, i.e., $O\left(N\right) \ll \left(1 + \left(n - 1\right)\alpha\right)O\left(N\right)$.

Meanwhile, I found MatMatSolve in PETSc, and .matSolve in petsc4py, so I tried

# A is a dolfin.PETScMatrix of size (N,N)
# X and B are dolfin.PETScMatrix of size (N,n)
A.mat().matSolve(X.mat(), B.mat())

but I got PETSc error code 73 (PETSC_ERR_ARG_WRONGSTATE: an object in an argument is in the wrong state).

Any ideas, please… :sob: