I know this has been raised previously, in, for instance, this post from the older forum, but I wanted to revisit the question of using FEniCS for integral operators. I’m interested in the eigenvalue problem of the form

(C\phi)(x) = \int k(x, y)\, \phi(y)\, dy = \lambda\, \phi(x),
where the kernel, k(x,y), is symmetric. I’d like to set this problem up in a way that preserves the self-adjointness of the C operator. I found that the method suggested in the previous post “works”, but produces, for instance, eigenvalues with small imaginary parts that I attribute to the loss of self-adjointness. I’d like to clean that up, if possible.
Hi, how far is the assembled matrix from being symmetric? Is it acceptable to do the eigenvalue analysis with (C + C^T)/2 instead? Which eigenvalue solver do you use?
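For instance, something like the following could quantify the asymmetry and apply that symmetrization, assuming `C` is your assembled operator as a dense NumPy array:

```python
import numpy as np

# relative departure of the assembled operator from symmetry
asym = np.linalg.norm(C - C.T) / np.linalg.norm(C)
print(f"relative asymmetry: {asym:.3e}")

# symmetrized matrix for the eigenvalue analysis
C_sym = 0.5 * (C + C.T)
```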
So there was certainly a finite error in the symmetry breaking, and it did resolve itself under mesh refinement. Your suggestion of symmetrizing the matrix is a decent one. The issue is that, effectively, your implementation treats the two arguments of the kernel differently: one argument is projected onto the FEM space, while the other is interpolated onto it at the nodal values.
I came up with an alternative, which is to treat the kernel via interpolation in both arguments, so that

k(x, y) \approx \sum_{i,j} k(x_i, x_j)\, \psi_i(x)\, \psi_j(y),

where the \psi_i are the FEM basis functions and the x_i the corresponding nodal points. This induces a matrix C with C_{ij} = k(x_i, x_j). Then you have the symmetric generalized linear system

M C M \phi = \lambda M \phi,

where M is the mass matrix,
and this provides satisfactory results. This may also be of use for the Fredholm equation you were originally interested in.
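A minimal sketch of this setup, assuming legacy FEniCS (dolfin), a 1D mesh with CG1 elements, and a placeholder exponential kernel standing in for the actual k(x, y), might look like:

```python
from dolfin import *
import numpy as np
from scipy.linalg import eigh

mesh = UnitIntervalMesh(200)
V = FunctionSpace(mesh, "CG", 1)

# mass matrix, converted to a dense array (fine for a small 1D problem)
u, v = TrialFunction(V), TestFunction(V)
M = assemble(u * v * dx).array()

# nodal coordinates, ordered consistently with the dofs of V
x = V.tabulate_dof_coordinates().reshape(-1)

# placeholder symmetric kernel; replace with the real k(x, y)
def k(x, y):
    return np.exp(-np.abs(x - y))

# C_ij = k(x_i, x_j): the kernel interpolated in both arguments
C = k(x[:, None], x[None, :])

# symmetric generalized eigenvalue problem  M C M phi = lambda M phi
A = M @ C @ M
evals, evecs = eigh(A, M)
```

With both A and M symmetric (and M positive definite), eigh returns purely real eigenvalues, which was the original goal.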
Two issues I am still less than thrilled about, though they may be unavoidable, are as follows:
I am using dense NumPy matrices for C because I’m not sure there’s a better option. My kernel does fall off relatively quickly with |x - y|, so maybe if I introduced a cutoff I could justify using a sparse representation (see the sketch below).
Since I am using these dense matrix representations, I end up just using the SciPy eigh eigensolver. Does anyone know if the SLEPc dense solvers would be more efficient, were I to set the problem up with them?
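On the first point, here is a rough sketch of the cutoff idea, reusing x, k, and M from the snippet above with a hypothetical truncation radius (in practice one would assemble C directly in sparse format rather than densifying it first):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

cutoff = 5.0  # hypothetical truncation radius for |x - y|

# dense kernel matrix, then zero out entries beyond the cutoff;
# the mask is symmetric in i and j, so C stays symmetric
C = k(x[:, None], x[None, :])
C[np.abs(x[:, None] - x[None, :]) > cutoff] = 0.0
C = sp.csr_matrix(C)

M_sp = sp.csr_matrix(M)
A = M_sp @ C @ M_sp

# a few eigenpairs of the generalized problem  M C M phi = lambda M phi
evals, evecs = eigsh(A, k=6, M=M_sp, which="LA")
```

Of course, eigsh only pays off if a few eigenpairs suffice; for the full spectrum, dense eigh on the symmetric system is probably still the way to go.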