However, if I define my problem using a TrialFunction (magnetostatic) instead of a Function (adjoint):
from dolfinx.fem import Function
from ufl import TestFunction, TrialFunction, dot, grad, dx

# DG0, P1: function spaces defined earlier on the mesh
J = Function(DG0)       # source current density
mu = Function(DG0)      # permeability (design variable)
lmbda = Function(P1)    # adjoint variable
Az = TrialFunction(P1)  # unknown of the linear magnetostatic problem
v = TestFunction(P1)
R = (1. / mu) * dot(grad(Az), grad(v)) * dx - J * v * dx
I get an error when I compute the adjoint:
dCostdmu = form(action(adjoint(derivative(R, mu)), lmbda))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/2/ufl/formoperators.py", line 176, in adjoint
return compute_form_adjoint(form, reordered_arguments)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/2/ufl/algorithms/formtransformations.py", line 458, in compute_form_adjoint
arguments = form.arguments()
^^^^^^^^^^^^^^^^
File "/path/2/ufl/form.py", line 102, in arguments
self._analyze_form_arguments()
File "/path/2/ufl/form.py", line 618, in _analyze_form_arguments
arguments, coefficients = extract_arguments_and_coefficients(self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/2/ufl/algorithms/analysis.py", line 218, in extract_arguments_and_coefficients
raise ValueError(
ValueError: Found different Arguments with same number and part.
Did you combine test or trial functions from different spaces?
The Arguments found are:
v_1
v_1
v_0
If I use a Function instead, the adjoint computation works, but then I cannot solve the forward problem R with a LinearSolver (magnetostatic); I have to define a NonlinearProblem (adjoint), even though R is linear.
Could you please explain how to proceed so that I can solve my (linear) forward problem with a linear solver instead of a Newton scheme?
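For reference, below is a minimal sketch of one way to get both (assuming a recent dolfinx with PETSc; DG0, P1, mu, J and lmbda are as defined above, bcs stands for the Dirichlet conditions of the actual setup, and Az_trial / R_lin are just illustrative names): keep Az as a Function in the residual so that derivative()/adjoint() work, and build the linear forward problem by substituting a TrialFunction with ufl.replace and splitting the result with ufl.lhs / ufl.rhs.

import ufl
from dolfinx.fem import Function, form
from dolfinx.fem.petsc import LinearProblem

Az = Function(P1)  # state as a Function, so derivative()/adjoint() work on R
v = ufl.TestFunction(P1)
R = (1.0 / mu) * ufl.dot(ufl.grad(Az), ufl.grad(v)) * ufl.dx - J * v * ufl.dx

# Forward problem: substitute a TrialFunction into a copy of R and split it
# into the bilinear part a(u, v) and the linear part L(v)
Az_trial = ufl.TrialFunction(P1)
R_lin = ufl.replace(R, {Az: Az_trial})
a = ufl.lhs(R_lin)
L = ufl.rhs(R_lin)

# bcs: list of Dirichlet conditions (assumed to exist in the actual setup)
problem = LinearProblem(a, L, bcs=bcs, u=Az,
                        petsc_options={"ksp_type": "preonly", "pc_type": "lu"})
problem.solve()  # Az now holds the magnetostatic solution

# Adjoint-based sensitivity, reusing the same residual R
dCostdmu = form(ufl.action(ufl.adjoint(ufl.derivative(R, mu)), lmbda))

Since replace only acts on the copy R_lin, the original residual R (with Az as a Function) stays available for derivative(R, mu) and the adjoint computation.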
Thanks for the info, I now understand the use of Function vs. TrialFunction.
I can guess the mathematical meaning of derivative(fun, var, trial), but what puzzles me is that the optimal-control tutorial seems to behave the same with or without the third argument of derivative(...).
If the argument is omitted, a new Argument is created in the same space as the coefficient, with argument number one higher than the highest one in the form.
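A quick, self-contained way to check this (a sketch with a made-up space W, coefficient f and form F, assuming a recent dolfinx where the function-space factory is fem.functionspace):

from mpi4py import MPI
import ufl
from dolfinx import fem, mesh

msh = mesh.create_unit_square(MPI.COMM_WORLD, 2, 2)
W = fem.functionspace(msh, ("Lagrange", 1))  # fem.FunctionSpace on older dolfinx

f = fem.Function(W)
v = ufl.TestFunction(W)
F = f**2 * v * ufl.dx

dF_auto = ufl.derivative(F, f)                        # third argument omitted
dF_expl = ufl.derivative(F, f, ufl.TrialFunction(W))  # TrialFunction passed explicitly

# Both forms carry Arguments with the same numbers (0 for the test function,
# 1 for the direction of differentiation), so downstream code behaves identically.
print([arg.number() for arg in dF_auto.arguments()])  # [0, 1]
print([arg.number() for arg in dF_expl.arguments()])  # [0, 1]

Since the form in the optimal-control tutorial presumably contains only a TestFunction (number 0), the automatically created Argument is the same TrialFunction one would pass by hand, which would explain why both variants behave the same there.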