Multiobjective optimization - Store sub-objective values and update weights

Hello everyone,

I am currently running a multi-objective optimization with the IPOPT solver and pyadjoint. My objective function has the following form:
J = j_1 + \alpha j_2, where j_1 and j_2 are the sub-objective values and \alpha is the weight associated with the second sub-objective.

I would like to store and retrieve the values of each individual sub-objective j_1 and j_2 throughout the optimization.

I know that I can define callbacks that are called at every iteration. However, the only arguments passed to the callback are the scalar value of the total objective function j and the design variable a, so I cannot access the sub-objective values.

Here is an MWE based on the topology optimization example governed by the Poisson equation: https://dolfin-adjoint-doc.readthedocs.io/en/latest/documentation/poisson-topology/poisson-topology.html

from __future__ import print_function
from dolfin import *
from dolfin_adjoint import *

try:
    from pyadjoint import ipopt  # noqa: F401
except ImportError:
    print("""This example depends on IPOPT and Python ipopt bindings. \
  When compiling IPOPT, make sure to link against HSL, as it \
  is a necessity for practical problems.""")
    raise

parameters["std_out_all_processes"] = False

alpha = Constant(1.0e-8)  # weight for sub-objective 2

n = 250
mesh = UnitSquareMesh(n, n)
A = FunctionSpace(mesh, "CG", 1)  # function space for control
P = FunctionSpace(mesh, "CG", 1)  # function space for solution
Vol = Constant(0.4)      # volume bound on the control

class WestNorth(SubDomain):
    def inside(self, x, on_boundary):
        return (x[0] == 0.0 or x[1] == 1.0) and on_boundary

bc = [DirichletBC(P, 0.0, WestNorth())]
f = interpolate(Constant(1.0e-2), P)  # the source term for the PDE


def forward(a):
    """Solve the forward problem for a given material distribution a(x)."""
    print("Solving the forward problem.")
    T = Function(P, name="Temperature")
    v = TestFunction(P)
    F = inner(grad(v), (a**Constant(5)) * grad(T)) * dx - f * v * dx
    solve(F == 0, T, bc,
          solver_parameters={"newton_solver": {"absolute_tolerance": 1.0e-7,
                                               "maximum_iterations": 20}})
    return T



def objective_function(a, annotate=False):
    """Assemble the weighted sum of the two sub-objectives."""
    j1 = f * T * dx                    # sub-objective 1
    j2 = inner(grad(a), grad(a)) * dx  # sub-objective 2
    return assemble(j1 + alpha * j2, annotate=annotate)



if __name__ == "__main__":
    a = interpolate(Vol, A)  # initial guess.
    T = forward(a)  # solve the forward problem once.
    
    # Callback called at each iteration
    total_obj_list = []
    def eval_cb(j, a):
        total_obj_list.append(j)

    J = objective_function(a, annotate=True)
    m = Control(a)
    Jhat = ReducedFunctional(J, m, eval_cb_post=eval_cb)
    
    volume_constraint = UFLInequalityConstraint((Vol - a)*dx, m) # Some constraint

    problem = MinimizationProblem(Jhat, bounds=(0.0, 1.0), constraints=volume_constraint)

    parameters = {"acceptable_tol": 1.0e-3, "maximum_iterations": 100}
    solver = IPOPTSolver(problem, parameters=parameters)
    a_opt = solver.solve()
    
    plot(a_opt)
    print("The total objective throughout the optimization: ", total_obj_list)

Let me know if I misunderstood something or if you have some suggestions on how to achieve this.

I would also like to update the weight \alpha during the optimization. Do you have some suggestions regarding this topic as well?

Thank you for your time

Use Control on each sub-objective and then call Control.tape_value() to see its current value on the tape. For instance:

j1 = assemble(a*dx)
j1_control = Control(j1)
j1_control.tape_value()
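To record the sub-objectives at every iteration, the tape_value() calls can go inside the eval_cb_post callback from the MWE. Here is a minimal sketch of that bookkeeping; a stand-in class replaces pyadjoint's Control (only tape_value() is modeled) so the snippet runs without FEniCS:

```python
class FakeControl:
    """Stand-in for pyadjoint.Control: only models tape_value()."""
    def __init__(self, value):
        self.value = value

    def tape_value(self):
        return self.value

# In the real code these would be Control(j1) and Control(j2).
j1_control = FakeControl(3.0)
j2_control = FakeControl(0.5)

j1_history, j2_history, total_history = [], [], []

def eval_cb(j, a):
    """Passed as eval_cb_post=eval_cb to ReducedFunctional; the optimizer
    calls it after each functional evaluation."""
    total_history.append(j)
    j1_history.append(j1_control.tape_value())
    j2_history.append(j2_control.tape_value())

eval_cb(3.5, None)  # simulate one optimizer iteration
print(j1_history, j2_history)  # -> [3.0] [0.5]
```

In the actual dolfin-adjoint run, the histories then line up one-to-one with the iterations reported by IPOPT.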

Thank you very much, it works properly. I didn't know this was possible.
I'm now able to store the sub-objective values at each iteration (by recording j1_control.tape_value() when the callback is called).

And regarding the second part of my question, updating the weight:
Ideally, I would like the two sub-objectives to end up with the same magnitude at the end of the optimization. This can be achieved by changing the weight during the optimization, updating it with the ratio of the sub-objectives from the previous iteration:

At iteration i of the optimization: \alpha^i = j_1^{i-1} / j_2^{i-1}, and then J^i = j_1^i + \alpha^i j_2^i.

However, with an approach similar to the one suggested above, tape_value() retrieves the current value on the tape but not the previous one… Do you have any suggestions on how to proceed?

j1 = assemble(a*dx); j1_control = Control(j1)
j2 = assemble(b*dx); j2_control = Control(j2)

alpha = j1_control.tape_value() / j2_control.tape_value()

J = j1 + alpha * j2
Jhat = ReducedFunctional(J, Control(a))

Maybe store the value of alpha at the end of the loop?

alpha = 1.0  # initial value of alpha
while ...:  # optimization loop
    # do something with alpha
    alpha = j1_control.tape_value() / j2_control.tape_value()  # update alpha
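As a sanity check of the update rule itself, here is a plain-Python sketch (no FEniCS; the numbers are made-up values standing in for j1_control.tape_value() and j2_control.tape_value()) showing that, once the weight has been updated, the two weighted terms have the same magnitude:

```python
def update_weight(alpha, j1_prev, j2_prev):
    """alpha^i = j1^{i-1} / j2^{i-1}; guard against a vanishing j2."""
    if j2_prev == 0.0:
        return alpha  # keep the old weight if j2 vanishes
    return j1_prev / j2_prev

alpha = 1.0         # initial weight
j1, j2 = 4.0, 0.25  # stand-in sub-objective values
for i in range(3):  # optimization loop (sketch)
    J = j1 + alpha * j2  # total objective at iteration i
    alpha = update_weight(alpha, j1, j2)
    # ... one optimizer step, re-evaluating j1 and j2, would go here ...

# If j1 and j2 stayed constant, the two weighted terms now match:
print(alpha * j2 == j1)  # -> True
```

In the real optimization j1 and j2 change between iterations, so the balance is only approximate, but the ratio update drives the two terms toward the same order of magnitude.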

You might also want to look into Placeholder, as used in this example: http://www.dolfin-adjoint.org/en/latest/documentation/mpec/mpec.html