Optimizing values on the boundary of an interior subdomain

Hi everyone,

I would like to use dolfin-adjoint to impose zero values on the boundary of an interior subdomain for the solution of a Poisson problem with Dirichlet boundary conditions.

I managed to construct the cost function (which is the L2 norm on the interior boundary) using the solution found here:
https://fenicsproject.discourse.group/t/integrating-over-a-sphere-within-the-mesh/1314
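
The trick there converts the boundary integral into a volume integral with the divergence theorem: for a disc \mathcal{O} of radius r centered at c, the outward unit normal is (x - c)/r, so

\int_{\partial\mathcal{O}} q \, ds = \int_{\mathcal{O}} \mathrm{div}\left( q \, \frac{x - c}{r} \right) \, dx,

which only needs volume cell markers, not a mesh that resolves the circle.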

However, moola's NewtonCG algorithm does not seem to converge. Here is the error I obtain:

Solving linear variational problem.
Newton CG method.
------------------------------
Line search:		 strong_wolfe
Maximum iterations:	 20

Solving linear variational problem.
iteration = 0:	objective = 0.08367483356893896:	grad_norm = 0.022263058063897254:
TEST: 0.0004956437543564606 1.103454568224591e-05
cg_iter = 0	curve = 1.4811129317951133e-06	hesstol = 0
Solving linear variational problem.
-g 0.16586360564337488
Solving linear variational problem.
iteration = 1:	objective = 0.0007430307472515684:	grad_norm = 0.0015273901706104637:	delta_J = 0.08293180282168738:
TEST: 2.332920733277461e-06 3.5632801968213494e-09
cg_iter = 0	curve = -2.1001392755979117e-09	hesstol = 0
Solving linear variational problem.
-g 2.332920733277461e-06
Solving linear variational problem.
Solving linear variational problem.
Solving linear variational problem.
Solving linear variational problem.
Solving linear variational problem.
Solving linear variational problem.
Solving linear variational problem.
Solving linear variational problem.
Solving linear variational problem.
Solving linear variational problem.
Traceback (most recent call last):
  File "mwe.py", line 71, in <module>
    sol = solver.solve()
  File "/Users/fabien/miniconda3/envs/fenics/lib/python3.6/site-packages/pyadjoint/tape.py", line 46, in wrapper
    return function(*args, **kwargs)
  File "/Users/fabien/miniconda3/envs/fenics/lib/python3.6/site-packages/moola/algorithms/newton_cg.py", line 158, in solve
    x, a = self.do_linesearch(objective, x, d)
  File "/Users/fabien/miniconda3/envs/fenics/lib/python3.6/site-packages/moola/algorithms/optimisation_algorithm.py", line 90, in do_linesearch
    alpha = self.linesearch.search(phi, phi_dphi, phi_dphi0)
  File "/Users/fabien/miniconda3/envs/fenics/lib/python3.6/site-packages/moola/linesearch/strong_wolfe.py", line 100, in search
    raise Warning(task)
Warning: Warning: stp = stpmax

And here is a minimal working example:

from dolfin import *
from dolfin_adjoint import *
import moola
import matplotlib.pyplot as plt

# Geometry of the interior circle
xc = .5
yc = .5
rc = .1

n = 50
mesh = UnitSquareMesh(n, n)

# Mark the cells inside the circle with subdomain id 1
class Circle(SubDomain):
    def inside(self, x, on_boundary):
        return (x[0]-xc)*(x[0]-xc) + (x[1]-yc)*(x[1]-yc) <= rc*rc

domains = MeshFunction("size_t", mesh, mesh.topology().dim())
domains.set_all(0)
Circle().mark(domains, 1)
dx1 = Measure('dx', domain=mesh, subdomain_data=domains)

def circle_integral(f, vols=domains, rad=rc):
    # The circle is marked as 1
    dV = Measure('dx', domain=vols.mesh(), subdomain_data=vols, subdomain_id=1)

    # Divergence theorem: rewrite the boundary integral of f*f as a
    # volume integral over the marked cells
    x = SpatialCoordinate(mesh)
    value = assemble(div(dot(f, f)*x)*dV)
    # Take the radius into account
    value /= rad

    return value

V = FunctionSpace(mesh, "CG", 1)

u = TrialFunction(V)
v = TestFunction(V)
g = Function(V)


a = inner(grad(u), grad(v))*dx1
L = g*v*dx1(1)

def top(x, on_boundary):
    return on_boundary and near(x[1], 1., 1e-14)

def other(x, on_boundary):
    return on_boundary and not near(x[1], 1., 1e-14)
bc1 = DirichletBC(V, 1.0, top)
bc2 = DirichletBC(V, 0.0, other)
bc = [bc1, bc2]

u = Function(V, name='State')
solve(a == L, u, bc)

J = circle_integral(u)

control = Control(g)

rf = ReducedFunctional(J, control)

problem = MoolaOptimizationProblem(rf)
g_moola = moola.DolfinPrimalVector(g)
solver = moola.NewtonCG(problem, g_moola, options={'gtol': 1e-6,
                                                   'maxiter': 20,
                                                   'display': 3,
                                                   'ncg_hesstol': 0})

sol = solver.solve()

with stop_annotating():
    g_opt = sol['control'].data
    g.assign(g_opt)

    solve(a == L, u, bc)

    plt.figure(1)
    plot(g)

    plt.figure(2)
    plot(u)

    plt.show()

Any ideas?

If you change to a steepest descent algorithm, you will observe that your problem is not well defined:

solver = moola.SteepestDescent(problem, g_moola,
                               options={"jtol": 1e-9, "gtol": 1e-5,
                                        "line_search": "armijo",
                                        "line_search_options": {"adaptive_stp": True,
                                                                "start_stp": 1}})

since it yields the following:

...
iteration = 12:	objective = -0.013062675580444981:	grad_norm = 0.0062929026238448755:
Solving linear variational problem.
Solving linear variational problem.
iteration = 13:	objective = -0.13984919741334723:	grad_norm = 0.02273463232825187:
Solving linear variational problem.
Solving linear variational problem.
iteration = 14:	objective = -2.1325987261804156:	grad_norm = 0.17385908281174672:
Solving linear variational problem.
Solving linear variational problem.
Solving linear variational problem.
...
iteration = 35:	objective = -788625339372004.6:	grad_norm = 4499656.034307889:
Solving linear variational problem.
Solving linear variational problem.
iteration = 36:	objective = -4474412053234503.5:	grad_norm = 9451143.463600252:
Solving linear variational problem.
Solving linear variational problem.
Solving linear variational problem.
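
As an aside, you can rule out an adjoint bug by checking the gradient with pyadjoint's taylor_test before optimizing. A minimal sketch, to be run right after rf is created (the reported convergence rate should be close to 2 if the gradient is consistent):

h = Function(V)
h.vector()[:] = 0.1  # arbitrary perturbation direction
rate = taylor_test(rf, g, h)  # rate close to 2 means the adjoint gradient is consistent

Here the gradient is not the issue; the functional itself is unbounded below, as the diverging objective values show.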

Thank you for your answer. You are right! It seems that it is because the cost function can take negative values…
Then, do you know how to define a cost function which is the L2 norm on the boundary of the subdomain?
I also tried to construct a mesh for the circle and to interpolate (or project) the solution of the Poisson problem onto this mesh before integrating on the boundary, but dolfin-adjoint crashes.

How about:

J = (circle_integral(u)**2)**0.5

control = Control(g)

rf = ReducedFunctional(J, control)
problem = MoolaOptimizationProblem(rf)
g_moola = moola.DolfinPrimalVector(g)
solver = moola.SteepestDescent(problem, g_moola,
                               options={"jtol": 1e-9, "gtol": 1e-5,
                                        "line_search": "armijo", "maxiter": 25,
                                        "line_search_options": {"adaptive_stp": True,
                                                                "start_stp": 1}})

Not sure that this is what you would like to have. It would be nice if you could write down the mathematical formulation of your problem, because it depends on whether you want the L2 norm of your integral or the L2 norm of the integrand.
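
To make the distinction concrete in the notation of the MWE (a sketch):

# "L2 norm of the integral": the absolute value of the scalar \int u^2 ds,
# which is what the snippet above computes
J1 = (circle_integral(u)**2)**0.5

# "L2 norm of the integrand": circle_integral already integrates dot(f, f) = u^2,
# so its plain value is the squared L2 norm of u on the circle boundary
J2 = circle_integral(u)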

Sure. Here is the problem I would like to solve:

Let \Omega be the entire square domain and \mathcal{O} a circular subdomain inside it.
I would like to find a control g such that the solution of the Poisson problem

find u such that
-\Delta u = f + g \chi_{\mathcal{O}} in \Omega,
u = 0 on \partial\Omega,

satisfies u = 0 on \partial\mathcal{O}.
Note: f is an exterior force and \chi_{\mathcal{O}} equals 1 inside \mathcal{O} and 0 otherwise.

In other words, I want to minimize the cost function
J(g) = \int_{\partial\mathcal{O}} |u|^2 \, ds,
under the constraint that u is the solution of the Poisson problem above.

However, I do not want the mesh on \Omega to conform to the interior boundary \partial\mathcal{O}.

Well, your functional is wrong.

You are integrating as if the circle were centered at the origin.
Since x is not the outward radial vector (x - c)/rc of your circle centered at c = (xc, yc), the effective integrand is not dot(f, f), and the functional can take negative values.
The correct functional is:

    value = assemble(div(dot(f,f)*(x-Constant((xc, yc))))*dV)

However, note that you should not set 'ncg_hesstol': 0, as it will make it almost impossible to pass an iteration test.
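
For completeness, the corrected helper from the MWE then reads (a sketch under the same setup as above, with the circle of radius rc centered at (xc, yc) marked with id 1):

def circle_integral(f, vols=domains, rad=rc):
    # The circle is marked as 1
    dV = Measure('dx', domain=vols.mesh(), subdomain_data=vols, subdomain_id=1)

    # Divergence theorem with the outward normal (x - c)/rad of the circle:
    # \int_{\partial O} f*f ds = \int_O div(f*f*(x - c)/rad) dx
    x = SpatialCoordinate(mesh)
    c = Constant((xc, yc))
    value = assemble(div(dot(f, f)*(x - c))*dV)
    value /= rad

    return value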

You are right, this works well! Thank you for your help!
Just another question: do you know which optimization algorithm should be the fastest for this kind of problem?

I would suggest trying different solvers on a relatively small problem to check the performance.
You can for instance compare with the moola steepest descent. However, remember to tune the tolerances, as well as to analyze the output, i.e. the functional value at termination and the norm of the gradient.
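
For example, a rough comparison loop over the two moola solvers used in this thread (a sketch reusing rf, V and the solver options from the posts above):

for name, Solver, opts in [
        ("NewtonCG", moola.NewtonCG, {'gtol': 1e-6, 'maxiter': 20}),
        ("SteepestDescent", moola.SteepestDescent, {'jtol': 1e-9, 'gtol': 1e-5})]:
    g0 = moola.DolfinPrimalVector(Function(V))  # same initial guess for both
    solver = Solver(MoolaOptimizationProblem(rf), g0, options=opts)
    sol = solver.solve()
    print(name, rf(sol['control'].data))  # functional value at termination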


Thanks! All your comments were very helpful.