# How does the Riesz representation of shape derivatives work?

Hi,

I am trying to understand how shape optimization works, but I am getting confused about how the Riesz representation of the shape derivative is used. For some shape functional
`J = ...`
I understand UFL can compute the shape derivative as

```
import dolfin as dfn
import ufl

mesh = dfn.UnitSquareMesh(10, 10)
W = dfn.VectorFunctionSpace(mesh, 'CG', 1)

x = ufl.SpatialCoordinate(mesh)
dJ = dfn.derivative(J, x)
```

where `dJ` is the shape derivative.

From reading some past papers (https://arxiv.org/pdf/2001.10058.pdf), my understanding is that `dJ` is a linear functional that needs a Riesz representation (stored in `gradJ` here) in the mesh's vector CG1 space, so that you can then use that representation to deform the mesh.

My understanding is that the straightforward choice is simply using
`dfn.assemble(dJ, tensor=gradJ.vector())`
which I guess corresponds to the Riesz representation with respect to the L2 inner product.
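To make my own mental model concrete (this is my understanding, not something from the paper, and the 1D P1 mass matrix below is a hand-rolled stand-in rather than FEniCS output): assembling `dJ` only gives the coefficient vector of the linear functional, whereas the actual L2 Riesz representative would come from solving with the mass matrix `M`:

```python
import numpy as np

# 1D P1 mass matrix on [0, 1] with n equal cells: (h/6) * tridiag(1, 4, 1),
# with the boundary diagonal entries halved to 2h/6.
n = 8
h = 1.0 / n
main = np.full(n + 1, 4.0)
main[0] = main[-1] = 2.0
M = (h / 6.0) * (np.diag(main) + np.diag(np.ones(n), 1) + np.diag(np.ones(n), -1))

# b plays the role of the assembled dJ: the coefficients of a linear functional.
rng = np.random.default_rng(0)
b = rng.standard_normal(n + 1)

# L2 Riesz representative: solve M g = b, so that (g, v)_L2 = dJ(v) for all v.
g = np.linalg.solve(M, b)

# Check the defining property on a random test direction v.
v = rng.standard_normal(n + 1)
assert np.isclose(v @ (M @ g), v @ b)

# Identifying b itself with gradJ (what plain assemble gives) instead pairs
# coefficient vectors with the plain Euclidean inner product, and g != b.
assert not np.allclose(g, b)
```

So if that is right, `dfn.assemble(dJ, tensor=gradJ.vector())` hands you `b`, and the L2 representation is one particular choice of `M`.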

In this paper they build the Riesz representation from the Laplacian operator via

```
trial = dfn.TrialFunction(W)
test = dfn.TestFunction(W)

# Riesz representation w.r.t. the Laplacian inner product
gradJ = dfn.Function(W)
dfn.solve(dfn.inner(dfn.grad(trial), dfn.grad(test)) * dfn.dx == dJ, gradJ)
```
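The way I picture this step (my own NumPy sketch, using a 1D P1 stiffness matrix with homogeneous Dirichlet conditions as an assumption, since the pure-Neumann Laplacian matrix would be singular) is that the stiffness matrix `K` simply replaces the mass matrix in the solve:

```python
import numpy as np

# 1D P1 stiffness matrix on [0, 1] with n cells and homogeneous Dirichlet BCs,
# so only the n - 1 interior dofs appear: (1/h) * tridiag(-1, 2, -1).
n = 8
h = 1.0 / n
K = (1.0 / h) * (2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1))

rng = np.random.default_rng(1)
b = rng.standard_normal(n - 1)  # stands in for the assembled dJ on interior dofs

# Laplacian-based Riesz representative: (grad g, grad v) = dJ(v) for all v.
g = np.linalg.solve(K, b)

v = rng.standard_normal(n - 1)
assert np.isclose(v @ (K @ g), v @ b)
```

i.e. the same dual vector `b`, but a different representative `g` because the inner product changed.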

What I'm confused about is whether `gradJ.vector()` is then the vector you would output to an optimization algorithm. Specifically, in optimization algorithms, for a given shape displacement `x` you would need a function like the one below

```
def obj_and_grad(x):
    dfn.ALE.move(mesh, x)

    obj = dfn.assemble(J)

    # use one of the above methods to compute `gradJ`

    return obj, gradJ.vector()
```
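To phrase my worry concretely with a toy quadratic objective in NumPy (my own stand-in, not the FEniCS code above): the Riesz representative `g = B⁻¹ b` only reproduces the first-order change of the objective through the `B`-inner product, not through the plain dot product an optimizer would use:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5

# Toy smooth objective f(x) = 0.5 x.A x, whose Euclidean gradient is b = A x.
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)  # make A symmetric positive definite
f = lambda x: 0.5 * x @ A @ x

x = rng.standard_normal(n)
b = A @ x  # the "assembled dJ": Euclidean gradient at x

# Riesz representative of b in the inner product induced by an SPD matrix B.
B = rng.standard_normal((n, n))
B = B @ B.T + n * np.eye(n)
g = np.linalg.solve(B, b)

# Finite-difference directional derivative along a random direction d.
d = rng.standard_normal(n)
eps = 1e-6
fd = (f(x + eps * d) - f(x)) / eps

# The finite difference matches b.d = (B g).d, not g.d:
assert abs(fd - b @ d) < 1e-4
assert abs(fd - (B @ g) @ d) < 1e-4
assert abs(b @ d - g @ d) > 1e-8  # g.d is generally the wrong pairing
```

If this toy picture carries over, handing `gradJ.vector()` (the representative) to a plain dot-product-based optimizer would mix up the two inner products.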

which returns the objective function value and gradient. When using the Laplacian-type Riesz representation, I'm confused because I think `gradJ` in this case doesn't correspond to the correct gradient of `obj_and_grad`, because of the different inner product used in the Riesz representation, i.e.:

```
# obj_and_grad(x + dx)[0] - obj_and_grad(x)[0]
# would not converge to the first-order term
# gradJ.vector().inner(dx.vector())
```