I wish to compute the derivatives of a variational form to be used for backpropagation. Is it possible to convert the TestFunction type to a Function from fenics_adjoint? Below is code that prints the two types; I would like the type of the weak residual entries to be the same as that of the strong residual.
from fenics import *
from fenics_adjoint import *
mesh = UnitSquareMesh(5, 5)
V = FunctionSpace(mesh, "CG", 1)
u = Function(V)
v = TestFunction(V)
# Rank-1 form: assembling returns a vector whose entries are plain floats
wr = assemble(v*dx)
print(f"Type of weak residual = {type(wr[0])}")
# Rank-0 form: assembling returns an overloaded scalar (AdjFloat)
sr = assemble(u*dx)
print(f"Type of strong residual = {type(sr)}")
Output:
Type of weak residual = <class 'float'>
Type of strong residual = <class 'pyadjoint.adjfloat.AdjFloat'>
Thanks for the prompt response. However, my objective is not to replace the weak form, but simply to convert the type of the values it returns. The TestFunction represents a set of compactly supported shape functions, each of which I would like to differentiate. The fact that ufl.replace returns a single function like u, instead of the set of shape functions v, confuses me.
I am also not sure what you want to do here, or how dolfin_adjoint/fenics_adjoint fits into the picture, especially since you have not specified which parameter you want to differentiate with respect to (in fenics_adjoint terms: what is the Control?), or what problem you are solving.
This is my best guess as to what you want to obtain:
from fenics import *
from fenics_adjoint import *
mesh = UnitSquareMesh(5, 5)
V = FunctionSpace(mesh, "CG", 1)
u = Function(V)
# Functional J(u) = integral of u over the mesh, recorded on the pyadjoint tape
wr = assemble(u*dx)
# Differentiate J with respect to the control u
J = ReducedFunctional(wr, Control(u))
dJdu = J.derivative()
print(dJdu.vector().get_local())
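As a quick sanity check (my addition, assuming pyadjoint's default l2 Riesz representation, so that derivative() returns the raw dual vector), the gradient entries should coincide with the assembled rank-1 form, since dJ/du in the direction of basis function phi_i is the integral of phi_i:
import numpy as np
# Assumption: dJdu holds the raw dual vector, so its entries should equal
# those of the assembled rank-1 form v*dx.
v = TestFunction(V)
ref = assemble(v*dx).get_local()
print(np.allclose(dJdu.vector().get_local(), ref))  # expected: True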
I need to overload the PETSc vector, as I have to differentiate weak_residual1 further for neural network training. Since only adjoint (overloaded) variables carry gradient information, I need it in the adjoint form.
There is no efficient way of doing this at the moment.
You could, for example, handle each basis function separately, assembling one component of the vector at a time:
weak_residual = []
for i in range(V.dim()):
    # Unit coefficient vector picks out basis function number i
    v_i = Function(V)
    v_i.vector()[i] = 1
    # Assembling a rank-0 form returns an overloaded AdjFloat
    weak_residual.append(assemble(v_i*dx))
Now each entry of weak_residual will be an overloaded float (AdjFloat).
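As a quick check (my addition, not part of the original answer), the collected values should reproduce the entries of the assembled rank-1 form:
import numpy as np
# Each AdjFloat compares as a plain float against the assembled vector entries
ref = assemble(TestFunction(V)*dx).get_local()
print(np.allclose([float(r) for r in weak_residual], ref))  # expected: True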
Sadly, this is quite slow if you have a lot of dofs.
There is no implementation for adjointing a rank 1 form in the master branch, but the actual derivative code isn’t very complex and can easily be extended from the current assembly adjoint code.
The only issue is the overloading of the generic vector (return value).
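To illustrate why the derivative itself is straightforward, here is a rough sketch in plain FEniCS (my own illustration, not the branch code): the adjoint of b = assemble(F) for a rank-1 form F depending on u is the transposed Jacobian dF/du applied to the incoming adjoint vector.
from fenics import *

mesh = UnitSquareMesh(5, 5)
V = FunctionSpace(mesh, "CG", 1)
u = interpolate(Constant(2.0), V)
v = TestFunction(V)

F = u*v*dx                   # rank-1 form depending on u
b = assemble(F)              # the weak residual vector

# Stand-in for whatever adjoint vector downstream blocks supply
lam = Function(V).vector()
lam[:] = 1.0

# Adjoint action: J^T * lam, with J = dF/du assembled as a matrix
J = assemble(derivative(F, u))
adj_out = Function(V).vector()
J.transpmult(lam, adj_out)
print(adj_out.get_local())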
There is a branch in the pyadjoint repository that implements an overloaded GenericVector. The branch was made for experimental use, so it only supports some basic operations with GenericVector.
Anyway, since the assembly implementation is rather easy, I pushed a commit to the branch, which you can check out here: