How can I avoid using a high-dimensional tensor function space?

I am solving a system of ordinary differential equations. The dimension of the system is variable, and I need it to work for high dimensions (a few hundred). My code works when the dimension L is reasonably small, around 50, but when L is around 100 I get a segmentation fault while the TensorFunctionSpace is being JIT-compiled. Below is an MWE.

import numpy as np
import dolfin as df

L = 100                                                           # system dimension; works around L = 50, fails around L = 100
mesh = df.IntervalMesh(1000, 0, 1001)                             # 1000 cells, 1001 vertices
TFS = df.TensorFunctionSpace(mesh, 'Lagrange', 1, shape=(L, L))   # JIT compilation of this element segfaults for large L
VFS = df.VectorFunctionSpace(mesh, 'Lagrange', 1, dim=L)

# Placeholder data: one dense L x L matrix per vertex (the real data come from a simulation)
mat = np.random.rand(1001 * L * L).reshape((1001, L, L))
mat_func = df.Function(TFS)
mat_func.vector()[:] = np.ravel(mat)                              # flat assignment assumes a particular dof ordering; harmless for random placeholder data

v = df.TestFunction(VFS)
u = df.TrialFunction(VFS)
LHS = (df.inner(df.grad(u), df.grad(v)) + df.dot(df.dot(mat_func, u), v)) * df.dx
RHS = df.dot(df.Constant((1.0,) * L), v) * df.dx                  # arbitrary right-hand side so the MWE is a well-posed linear solve

w = df.Function(VFS)
df.solve(LHS == RHS, w)

In my actual application, mat is given as a NumPy array (the random values above are just a placeholder). Below is the error.

/tmp/tmp1jko8lty/ffc_element_0ea1b1872d8c05f63b973cf15ba23bec266f6987.cpp: In member function 'virtual void ffc_element_0ea1b1872d8c05f63b973cf15ba23bec266f6987_finite_element_main::evaluate_basis_derivatives(std::size_t, std::size_t, double*, const double*, const double*, int, const ufc::coordinate_mapping*) const':
/tmp/tmp1jko8lty/ffc_element_0ea1b1872d8c05f63b973cf15ba23bec266f6987.cpp:1499402:0: note: -Wmisleading-indentation is disabled from this point onwards, since column-tracking was disabled due to the size of the code/headers
             static const double coefficients0[2] = { 0.7071067811865475, 0.4082482904638631 };
 
/tmp/tmp1jko8lty/ffc_element_0ea1b1872d8c05f63b973cf15ba23bec266f6987.cpp: In function 'virtual void ffc_element_0ea1b1872d8c05f63b973cf15ba23bec266f6987_finite_element_main::evaluate_reference_basis_derivatives(double*, std::size_t, std::size_t, const double*) const':
/tmp/tmp1jko8lty/ffc_element_0ea1b1872d8c05f63b973cf15ba23bec266f6987.cpp:40145:6: internal compiler error: Segmentation fault
 void ffc_element_0ea1b1872d8c05f63b973cf15ba23bec266f6987_finite_element_main::evaluate_reference_basis_derivatives(double * reference_values,
      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
mmap: Cannot allocate memory

Any ideas for how to fix the above error? If not, is there a way to implement the matrix-value assignment and the matrix-vector multiplication without creating the TensorFunctionSpace? I will also post the full traceback below for reference.

DijitsoError                              Traceback (most recent call last)
/mnt/c/Users/leebs/Dropbox/MPL/infer-fenics-timetest.py in <module>
    420 
    421 if __name__ == '__main__':
--> 422     main(sys.argv[1:])
    423 
    424 

/mnt/c/Users/leebs/Dropbox/MPL/infer-fenics-timetest.py in main(args)
    281         print("time to define vector space:", vector_space_time - mesh_time)
    282 
--> 283     TFS = df.TensorFunctionSpace(mesh, 'Lagrange', 1, shape=(L,L))  # The tensor function space for the covariance matrix
    284 
    285     if timed==2:

~/miniconda3/envs/fenicsproject/lib/python3.8/site-packages/dolfin/function/functionspace.py in TensorFunctionSpace(mesh, family, degree, shape, symmetry, constrained_domain, restriction)
    232 
    233     # Return (Py)DOLFIN FunctionSpace
--> 234     return FunctionSpace(mesh, element, constrained_domain=constrained_domain)

~/miniconda3/envs/fenicsproject/lib/python3.8/site-packages/dolfin/function/functionspace.py in __init__(self, *args, **kwargs)
     29                 pass
     30             elif len(args) == 2:
---> 31                 self._init_from_ufl(*args, **kwargs)
     32             else:
     33                 self._init_convenience(*args, **kwargs)

~/miniconda3/envs/fenicsproject/lib/python3.8/site-packages/dolfin/function/functionspace.py in _init_from_ufl(self, mesh, element, constrained_domain)
     40 
     41         # Compile dofmap and element
---> 42         ufc_element, ufc_dofmap = ffc_jit(element, form_compiler_parameters=None,
     43                                           mpi_comm=mesh.mpi_comm())
     44         ufc_element = cpp.fem.make_ufc_finite_element(ufc_element)

~/miniconda3/envs/fenicsproject/lib/python3.8/site-packages/dolfin/jit/jit.py in mpi_jit(*args, **kwargs)
     45         # Just call JIT compiler when running in serial
     46         if MPI.size(mpi_comm) == 1:
---> 47             return local_jit(*args, **kwargs)
     48 
     49         # Default status (0 == ok, 1 == fail)

~/miniconda3/envs/fenicsproject/lib/python3.8/site-packages/dolfin/jit/jit.py in ffc_jit(ufl_form, form_compiler_parameters)
     95     p.update(dict(parameters["form_compiler"]))
     96     p.update(form_compiler_parameters or {})
---> 97     return ffc.jit(ufl_form, parameters=p)
     98 
     99 

~/miniconda3/envs/fenicsproject/lib/python3.8/site-packages/ffc/jitcompiler.py in jit(ufl_object, parameters, indirect)
    215 
    216     # Inspect cache and generate+build if necessary
--> 217     module = jit_build(ufl_object, module_name, parameters)
    218 
    219     # Raise exception on failure to build or import module

~/miniconda3/envs/fenicsproject/lib/python3.8/site-packages/ffc/jitcompiler.py in jit_build(ufl_object, module_name, parameters)
    128 
    129     # Carry out jit compilation, calling jit_generate only if needed
--> 130     module, signature = dijitso.jit(jitable=ufl_object,
    131                                     name=module_name,
    132                                     params=params,

~/miniconda3/envs/fenicsproject/lib/python3.8/site-packages/dijitso/jit.py in jit(jitable, name, params, generate, send, receive, wait)
    214     if err_info:
    215         # TODO: Parse output to find error(s) for better error messages
--> 216         raise DijitsoError("Dijitso JIT compilation failed, see '%s' for details"
    217                            % err_info['fail_dir'], err_info)
    218 

DijitsoError: Dijitso JIT compilation failed

I think the answer here depends on knowing more about the structure of mat_func in the target application. (I assume that filling it with random numbers is just a placeholder for demonstrating the problem.) Is it really a dense matrix at each point? Is it spatially constant? If it is spatially varying, is there a closed-form expression, i.e., a known formula for the ij-th entry? If mat_func is dense and varies spatially in some non-closed-form manner, the problem may be difficult to avoid. But if it is sparse, spatially constant, or has a formula for the ij-th entry, then the summation corresponding to

df.dot(df.dot(mat_func, u), v)

could be built up programmatically using Python for loops, skipping over zero entries; a sketch of this approach follows.
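
Here is a minimal sketch of that idea for the sparse, spatially-constant case. The specifics below are my own illustration and not part of the original question (in particular the name mat_coo and the 1% density are made up): each nonzero entry becomes a df.Constant, and the coupling term is accumulated entry by entry, so no (L, L)-shaped TensorFunctionSpace ever has to be compiled.

import scipy.sparse as sp
import dolfin as df

L = 100
mesh = df.IntervalMesh(1000, 0, 1001)
VFS = df.VectorFunctionSpace(mesh, 'Lagrange', 1, dim=L)

u = df.TrialFunction(VFS)
v = df.TestFunction(VFS)

# Hypothetical sparse, spatially-constant coefficient matrix; COO format
# makes it easy to loop over the nonzero entries only.
mat_coo = sp.random(L, L, density=0.01, format='coo')

# Accumulate the integrand of sum_ij M_ij * u_j * v_i term by term,
# skipping zero entries, instead of forming an (L, L) TensorFunctionSpace.
integrand = df.inner(df.grad(u), df.grad(v))
for i, j, val in zip(mat_coo.row, mat_coo.col, mat_coo.data):
    integrand = integrand + df.Constant(float(val)) * u[int(j)] * v[int(i)]

LHS = integrand * df.dx

The size of the generated code should then scale with the number of nonzero entries rather than with L*L.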

You are correct; filling it with random numbers was just a placeholder. It is indeed a dense, spatially varying matrix with no closed form for its entries. The entries of the matrix are the output of a simulation.
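
For that case, one possible workaround (a sketch only; the names vertex_values and entry_funcs are mine, and I have not tested this at L = 100) is to store each matrix entry in its own scalar P1 Function, fill it from the per-vertex simulation output via dof_to_vertex_map, and build the coupling term with a double loop. This avoids compiling an (L, L)-shaped element, but it still puts L*L terms into the form, which may itself become a bottleneck for the form compiler at L = 100.

import numpy as np
import dolfin as df

L = 100
nvert = 1001
mesh = df.IntervalMesh(nvert - 1, 0, 1001)
VFS = df.VectorFunctionSpace(mesh, 'Lagrange', 1, dim=L)
S = df.FunctionSpace(mesh, 'Lagrange', 1)        # scalar space holding one matrix entry

# Hypothetical per-vertex simulation output, shape (num_vertices, L, L).
vertex_values = np.random.rand(nvert, L, L)

# For P1, dofs sit at vertices; d2v maps dof index -> vertex index.
d2v = df.dof_to_vertex_map(S)

# One scalar Function per matrix entry, filled from the vertex data.
entry_funcs = [[df.Function(S) for _ in range(L)] for _ in range(L)]
for i in range(L):
    for j in range(L):
        entry_funcs[i][j].vector().set_local(vertex_values[:, i, j][d2v])
        entry_funcs[i][j].vector().apply('insert')

u = df.TrialFunction(VFS)
v = df.TestFunction(VFS)

# Build sum_ij M_ij(x) * u_j * v_i without a TensorFunctionSpace.
integrand = df.inner(df.grad(u), df.grad(v))
for i in range(L):
    for j in range(L):
        integrand = integrand + entry_funcs[i][j] * u[j] * v[i]

LHS = integrand * df.dx

If even this form becomes too large to compile, a further option would be to take the coupling term out of UFL entirely and assemble its contribution into the system matrix by hand (e.g., with PETSc or SciPy), but that is a bigger departure from the code above.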