Sparsity Pattern for Mixed and Vector FunctionSpaces

Hi,
I have a short question about the sparsity pattern used for MixedFunctionSpaces and VectorFunctionSpaces. I noticed that FEniCS can have a lot of overhead when assembling matrices on these kinds of spaces, as illustrated in the following example (I chose a VectorFunctionSpace for simplicity; the behavior is exactly the same for a MixedFunctionSpace).

from fenics import *
import numpy as np

mesh = UnitSquareMesh(10, 10)
dx = Measure('dx', domain=mesh)

# Scalar CG1 space: the allocated pattern matches the actual couplings.
V1 = FunctionSpace(mesh, 'CG', 1)
u = TrialFunction(V1)
v = TestFunction(V1)
A = assemble(u*v*dx)
nnz_a_actual = A.nnz()  # entries allocated by the sparsity pattern
A_pet = as_backend_type(A).mat()
nnz_a_needed = np.nonzero(A_pet[:, :])[0].size  # entries that are actually nonzero

# Vector CG1 space: the same scalar form, acting on component 0 only.
V2 = VectorFunctionSpace(mesh, 'CG', 1)
phi = TrialFunction(V2)
psi = TestFunction(V2)
B = assemble(phi[0]*psi[0]*dx)
nnz_b_actual = B.nnz()  # entries allocated for the fully coupled pattern
B_pet = as_backend_type(B).mat()
nnz_b_needed = np.nonzero(B_pet[:, :])[0].size  # entries that are actually nonzero

You can observe that the first form has 761 nonzero entries (which is also the number of allocated entries), while the second form also has only 761 nonzero entries (it is the same form), but memory has been allocated for 3044 entries.
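To make the comparison explicit, the four quantities can be printed directly. The factor of four comes from the fully coupled pattern storing a 2x2 component block for every entry of the scalar pattern (the numbers in the comments are what I get for this 10x10 mesh in serial):

print(nnz_a_actual, nnz_a_needed)  # 761 761
print(nnz_b_actual, nnz_b_needed)  # 3044 761

# The vector pattern couples both components with both components,
# so every scalar entry turns into a 2x2 block: 4 * 761 = 3044.
assert nnz_b_actual == 4 * nnz_a_actual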

As far as I understand it, the sparsity pattern is built from the FunctionSpace alone, independently of the actual couplings between its components, which enables FEniCS to handle any kind of coupling (which is totally reasonable).

Is there any way to condense the resulting matrices, so that the memory overhead is not as large when I am not considering fully coupled systems? Or alternatively: is it possible to compute or build a sparsity pattern based on the actual weak form I am using, or to specify manually which “blocks” of the matrix I need when investigating a mixed problem?
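For example, one could imagine condensing after assembly by copying the matrix into a new one with exact preallocation, dropping the explicitly stored zeros. A minimal sketch of this idea for the serial case, assuming petsc4py and scipy are available (note that it only reduces memory after assembly rather than avoiding the over-allocation during assembly, and that it drops every entry whose assembled value is exactly zero, even if it belongs to a genuine coupling):

from petsc4py import PETSc
from scipy.sparse import csr_matrix

# Extract the CSR data of the assembled matrix (serial run assumed).
ai, aj, av = B_pet.getValuesCSR()
S = csr_matrix((av, aj, ai), shape=B_pet.getSize())
S.eliminate_zeros()  # drop allocated entries whose value is exactly zero

# Rebuild a PETSc matrix that only preallocates the remaining entries.
B_condensed = PETSc.Mat().createAIJ(size=S.shape, csr=(S.indptr, S.indices, S.data))
print(B_condensed.getInfo()['nz_allocated'])  # should report 761 instead of 3044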

You can consider the test which builds its own sparsity pattern.
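Independently of that test, one way to get at the “specify the blocks manually” idea is to assemble only the blocks that are actually coupled on the scalar space and combine them with petsc4py's MatNest. A rough serial sketch, reusing u and v from the script above (a real problem would assemble whatever belongs in the other blocks, and a nested matrix needs block-aware solvers plus manual handling of boundary conditions and mixed dofmaps):

from petsc4py import PETSc

# Assemble only the block that is actually coupled, on the scalar space V1.
A00 = as_backend_type(assemble(u*v*dx)).mat()

# Empty placeholder for the uncoupled (1, 1) block (no preallocated entries).
A11 = PETSc.Mat().createAIJ(A00.getSize(), nnz=0)
A11.assemble()

# None stands for a zero off-diagonal block, so nothing is allocated for it.
B_nest = PETSc.Mat().createNest([[A00, None], [None, A11]])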

Thank you very much. I’ll take a look at this.