Hi!
I am trying to solve a problem that essentially involves solving N subproblems whose weak form is: find u_i such that a_i(u_i, v) = (f, v) for all test functions v,
where a_i(u,v) = \alpha_i A(u,v) + \beta_i B(u,v) and i = 1, ..., N. Here A and B are bilinear forms and \alpha_i, \beta_i are constants depending on the index. I then want to sum all the solutions together.
I can solve this fine in the "naive" way, by creating a new problem in each loop iteration and calling solve, but this is slow. I realized that I can save a lot of time by assembling the matrices associated with A and B once and then reusing them, right? My question is: what else can I do to speed this up? Probably by replacing the naive solve with some other method, but which one? How do I decide? I would also like to parallelize this somehow.
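In case it helps to make the reuse idea concrete, here is a minimal sketch of what I have in mind, using SciPy sparse matrices as stand-ins for the assembled forms (the matrices, sizes, and coefficients below are all placeholders, not a real FEM assembly). A and B are assembled once; each subproblem is then just a cheap sparse combination plus a solve, and since the subproblems are independent, the loop body could in principle be distributed across processes.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Stand-ins for the matrices assembled ONCE from the bilinear forms
# A(u, v) and B(u, v); sizes and entries here are made up for the demo.
n = 200
rng = np.random.default_rng(0)
A = sp.random(n, n, density=0.01, random_state=0)
A = (A + A.T + 4.0 * sp.identity(n)).tocsc()   # symmetric, diagonally dominant
B = sp.identity(n, format="csc")               # e.g. a mass matrix
b = rng.standard_normal(n)                     # one shared load vector

N = 5
alphas = rng.uniform(1.0, 2.0, size=N)         # placeholder coefficients
betas = rng.uniform(1.0, 2.0, size=N)

# Combine, factor, solve, and accumulate -- no reassembly inside the loop.
u_sum = np.zeros(n)
for alpha, beta in zip(alphas, betas):
    K = (alpha * A + beta * B).tocsc()         # cheap sparse combination
    u_sum += splu(K).solve(b)                  # direct solve; could swap for CG
```

Each iteration only touches precomputed data, so the loop is embarrassingly parallel (e.g. multiprocessing, or MPI in a real FEM code); whether a direct factorization per i or a preconditioned iterative solver is faster presumably depends on problem size and how much \alpha_i, \beta_i vary.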
My other problem concerns the right-hand side f. It is given as f = \sum_{i=1}^N \sum_{j=1}^i c_{ij} f_{ij}, where the c_{ij} are constants and the f_{ij} are functions, so essentially a Fourier series expansion. I need to be able to change the c_{ij}. What I have done so far is simply reassemble the vector corresponding to (f, v) each time I update the coefficients, but this does not seem to be the most efficient approach. Would it be more efficient to compute the vectors b_{ij} = (f_{ij}, v) once and store them, and then, each time the coefficients are updated, just build b = \sum_{i,j} c_{ij} b_{ij}? But that is also costly! Any tips for a good implementation that will not cost me a fortune?
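For what it's worth, here is a sketch of the precompute-and-store variant I am considering, again with NumPy stand-ins rather than real assembled vectors (ndof, N, and the vectors themselves are placeholders). If the precomputed vectors are stored as the columns of one dense matrix, then updating the coefficients reduces to a single matrix-vector product, which should be far cheaper than reassembling (f, v):

```python
import numpy as np

# Precompute the load vectors once; here random stand-ins for the
# assembled vectors b_ij = (f_ij, v). In a real code, each column
# would come from one assembly of (f_ij, v).
ndof, N = 500, 6
M = N * (N + 1) // 2                 # number of (i, j) pairs with j <= i
rng = np.random.default_rng(1)
F = rng.standard_normal((ndof, M))   # column k holds the vector for pair k

# Updating the coefficients now costs one dense matvec, O(ndof * M),
# instead of a full reassembly of (f, v).
c = rng.standard_normal(M)           # current coefficients c_ij, flattened
b = F @ c                            # right-hand side for the new c
```

The summation b = \sum c_{ij} b_{ij} done this way is just BLAS-level work on stored data, so I would expect it to be cheap compared to any reassembly, but I may be missing something.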