# Is it possible to convert TestFunction to adjoint Function?

I wish to compute derivatives of a variational form to use for backpropagation. Is it possible to convert the `TestFunction` type to the overloaded `Function` type from `fenics_adjoint`? Below is code that prints the two types; I would like the type of the weak residual to match that of the strong residual.

``````
from fenics import *

mesh = UnitSquareMesh(5, 5)
V = FunctionSpace(mesh, "CG", 1)

u = Function(V)
v = TestFunction(V)
wr = assemble(v*dx)
print(f"Type of weak residual = {type(wr)}")
sr = assemble(u*dx)
print(f"Type of strong residual = {type(sr)}")
``````

Output:

``````
Type of weak residual = <class 'dolfin.cpp.la.Vector'>
Type of strong residual = <class 'float'>
``````


I’m not sure what your goal is, but does the following help?

``````
import ufl
from fenics import *

mesh = UnitSquareMesh(5, 5)
V = FunctionSpace(mesh, "CG", 1)

u = Function(V)
v = TestFunction(V)
wr_form = v*dx
wr = assemble(wr_form)
print(f"Type of weak residual = {type(wr)}")
sr_form = u*dx
sr = assemble(sr_form)
print(f"Type of strong residual = {type(sr)}")

replaced_form = ufl.replace(wr_form, {v: u})
replaced = assemble(replaced_form)
print(f"Type of replaced residual = {type(replaced)}")
``````
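Note that replacing `v` by `u` turns the rank-1 form into a rank-0 form, so the last `assemble` returns a scalar (the residual evaluated at `u`) rather than a vector with one entry per basis function.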

Thanks for the prompt response. However, my objective is not to replace the test function in the weak form, but simply to convert the type of the value it returns. The `TestFunction` is a set of compactly supported shape functions, each of which I would like to differentiate. The fact that `ufl.replace` substitutes a single function like `u` for the whole set of shape functions `v` confuses me.
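To make concrete what I mean by a set of shape functions, here is a minimal sketch of my own (not part of my original code): each entry of the assembled vector belongs to one basis function, and I can reproduce a single entry by assembling with that basis function in place of `v`.

``````
from fenics import *

mesh = UnitSquareMesh(5, 5)
V = FunctionSpace(mesh, "CG", 1)
v = TestFunction(V)

wr = assemble(v*dx)   # rank-1 form: one entry per basis function phi_i

phi_0 = Function(V)   # build the 0-th basis function explicitly
phi_0.vector()[0] = 1.0
print(wr.get_local()[0], assemble(phi_0*dx))  # the two numbers agree
``````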

I am also not sure what you want to do here, and how `dolfin_adjoint`/`fenics_adjoint` fits into the picture, especially as you have not specified which parameter you want to differentiate with respect to (in fenics_adjoint terms: what is the `Control` parameter) or what problem you are solving.

This is my best guess as to what you want to obtain:

``````
from fenics import *
from fenics_adjoint import *

mesh = UnitSquareMesh(5, 5)
V = FunctionSpace(mesh, "CG", 1)

u = Function(V)
wr = assemble(u*dx)
J = ReducedFunctional(wr, Control(u))
dJdu = J.derivative()
print(dJdu.vector().get_local())
``````
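As a hedged sanity check of my own (assuming the default behaviour of `derivative`, which returns the assembled gradient as a `Function` on `V`): since `J` is linear in `u`, `dJdu` should coincide with `assemble(v*dx)` for `v = TestFunction(V)`, i.e. exactly the weak-residual vector from your first post.

``````
# Continuing the snippet above (a sketch, not from the original thread):
v = TestFunction(V)
expected = assemble(v*dx)   # the weak-residual vector
diff = dJdu.vector() - expected
print(diff.norm("linf"))    # ~0 up to round-off
``````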

``````
)*dx

Lsupg1 = f*v*dx + inner(f, tau*dot(b, grad(v)))*dx

weak_residual1 = assemble(asupg1 - Lsupg1)

print(f"Type of weak residual1:{type(weak_residual1)}")
``````

Output:

``````
Type of weak residual1:<class 'dolfin.cpp.la.Vector'>
``````

Not straight out of the box. It would help if you could explain why you want to overload the PETSc vector, and what you want to use it for.

I need to overload the PETSc vector because I have to further differentiate `weak_residual1` for neural-network training. Since only adjoint (overloaded) variables carry gradient information, I need it in adjoint form.

There is no efficient way of doing this at the moment.

You could for example handle each basis function separately, assembling one component of the vector at a time:

``````
weak_residual = []
for i in range(V.dim()):
    v_i = Function(V)                         # zero function in V
    v_i.vector()[i] = 1                       # make it the i-th basis function
    weak_residual.append(assemble(v_i*dx))    # 0-form: assembles to a scalar
``````

Now each entry of `weak_residual` will be an overloaded float (`AdjFloat`).
Sadly, this is quite slow if you have many dofs.
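For example, here is a hedged sketch of how those `AdjFloat` entries could feed a differentiable scalar; the coefficient `m` and the squared-sum loss are hypothetical stand-ins for whatever your network produces, not something from this thread.

``````
from fenics import *
from fenics_adjoint import *

mesh = UnitSquareMesh(5, 5)
V = FunctionSpace(mesh, "CG", 1)
m = interpolate(Constant(1.0), V)   # hypothetical control, e.g. a network output

weak_residual = []
for i in range(V.dim()):
    v_i = Function(V)
    v_i.vector()[i] = 1.0                      # i-th basis function
    weak_residual.append(assemble(m*v_i*dx))   # AdjFloat that depends on m

loss = AdjFloat(0.0)                 # placeholder loss: sum of squared entries
for r in weak_residual:
    loss = loss + r*r

Jhat = ReducedFunctional(loss, Control(m))
dloss_dm = Jhat.derivative()         # gradient of the loss w.r.t. the dofs of m
print(dloss_dm.vector().get_local())
``````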

There is no implementation for adjointing a rank-1 form in the master branch, but the actual derivative code isn't very complex and could easily be extended from the current assembly adjoint code.
The only issue is overloading the generic vector (the return value).

There is a branch in the pyadjoint repository that implements an overloaded `GenericVector`. The branch was made for experimental use, so it only supports some basic operations on `GenericVector`.
Anyway, since the assembly implementation is rather easy, I pushed a commit to the branch, which you can check out here:


Thanks a lot. It solved my problem.