Calculating the Jacobian matrix using dolfin-adjoint

Hello everyone,
Some optimization methods, such as Levenberg–Marquardt, need the Jacobian matrix of a vector-valued function. I would like to ask whether dolfin-adjoint can calculate the Jacobian matrix, or whether it only calculates the gradient of the objective function (the objective function takes the form of a sum of squared errors).

I do not use dolfin-adjoint myself, but looking at the API reference, there appears to be no function to assemble the Jacobian matrix. Intuitively, I would say that assembling the Jacobian does not scale as the problem size grows. For instance, if your forward solution (and control) has N dofs and your residual vector has M entries with M comparable to N, then the Jacobian is a dense matrix of size M \times N, which can be too large to store for large problems. Also, with a vector-valued objective function, your adjoint solution is now a matrix of size N \times M, and assembling it requires M adjoint solves (one per residual component). For the same reasons, optimizers usually do not store the full Hessian but rather a limited-memory approximation (e.g. L-BFGS vs BFGS), or only provide routines to evaluate the Hessian action on a given vector.
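That said, if your problem is small enough that the full Jacobian fits in memory, one workaround (a minimal sketch, not an official dolfin-adjoint feature) is to treat each residual component as a scalar functional and call `compute_gradient` once per component, i.e. one adjoint solve per Jacobian row. The Poisson setup, the control `m` and the observation functionals below are placeholders:

```python
from fenics import *
from fenics_adjoint import *

# Toy forward problem: -div(grad(u)) = m on the unit square, u = 0 on the boundary.
mesh = UnitSquareMesh(16, 16)
V = FunctionSpace(mesh, "CG", 1)

m = Function(V)                       # control (source term), placeholder
m.interpolate(Constant(1.0))

u = Function(V)
v = TestFunction(V)
F = inner(grad(u), grad(v)) * dx - m * v * dx
bc = DirichletBC(V, 0.0, "on_boundary")
solve(F == 0, u, bc)

# Placeholder residual components r_i; in a real least-squares problem these
# would be the individual misfit terms entering the sum of squared errors.
u_obs = Function(V)                   # pretend these are the observations
residuals = [assemble((u - u_obs) * dx)]

control = Control(m)
jacobian_rows = []
for r_i in residuals:
    dr_dm = compute_gradient(r_i, control)        # one adjoint solve per row
    jacobian_rows.append(dr_dm.vector().get_local())
```

With M residual components this costs M adjoint solves, which is exactly why it stops being practical for large M.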
Maybe there are clever tricks to use least-squares optimizers in large-scale problems, but I am not aware of them :grinning:
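For completeness, here is a sketch of evaluating the Hessian action (rather than the full Hessian) through a `ReducedFunctional`, assuming the standard pyadjoint interface and continuing from a taped forward solve like the one above; the functional and perturbation direction are placeholders:

```python
# Scalar sum-of-squares functional of the taped solution u and control m.
J = assemble(0.5 * (u - u_obs)**2 * dx)
Jhat = ReducedFunctional(J, Control(m))

direction = Function(V)
direction.interpolate(Constant(1.0))   # arbitrary perturbation direction

Jhat.derivative()                      # adjoint solve (gradient)
Hv = Jhat.hessian(direction)           # Hessian action applied to the direction
```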
