Adding constant value for expression

Hello everyone, I want to use a constant initial expression for X below. However, when I add it, I do not get the correct result. It works when I use spatial coordinates, e.g. X = interpolate(Expression("x[0]+x[1]", degree=1, X0=X0), W). Thank you!

from dolfin import *
from dolfin_adjoint import *
import moola 
import matplotlib.pyplot as plt
import numpy as np

# Set log level

# Optimization options for the form compiler
parameters["form_compiler"]["cpp_optimize"] = True
ffc_options = {"optimize": True,
               "eliminate_zeros": True,
               "precompute_basis_const": True,
               "precompute_ip_const": True}

n = 64
mesh = UnitSquareMesh(n, n)
V = FunctionSpace(mesh, "CG", 2)
W = FunctionSpace(mesh, "DG", 0)
V_vector = VectorFunctionSpace(mesh, "CG", 2)
def boundary_bot(x, on_boundary):
    tol = 1E-7
    return on_boundary and near(x[1], 0.0, tol)

bc_bot = DirichletBC(V_vector, Constant((0.0, 0.0)), boundary_bot)

def boundary_top(x, on_boundary):
    tol = 1E-7
    return on_boundary and near(x[1], 1.0, tol)

max_disp = -0.09
disp = Constant([0.0, max_disp]) 
bc_top = DirichletBC(V_vector, disp, boundary_top)
bcs = [bc_bot, bc_top]

boundary_subdomains = MeshFunction("size_t", mesh, mesh.topology().dim() - 1)
AutoSubDomain(boundary_top).mark(boundary_subdomains, 1)
dss = ds(subdomain_data=boundary_subdomains)

X0 = 0.82
X = interpolate(Expression("X0", degree=1, X0=X0), W)
file = File("X0.pvd")
file << X

eps = Constant(1e-0)
J = assemble((1 / eps) * X**2 * (1 - X)**2 * dx)

control = Control(X)

rf = ReducedFunctional(J, control)

problem = MoolaOptimizationProblem(rf)
f_moola = moola.DolfinPrimalVector(X)
solver = moola.NewtonCG(problem, f_moola, options={'gtol': 1e-3,
                                                   'maxiter': 50,
                                                   'display': 3,
                                                   'ncg_hesstol': 0})

sol = solver.solve()
f_opt = sol['control'].data

file = File("Xtrial.pvd")
file << f_opt

Try the following:

X0 = Constant(0.82)
X = interpolate(X0, W)

Hi, thank you, and yes, I tried that, but my problem is that I don't get a correct answer. My answer only makes sense if I use spatial coordinates.

By your answer, do you mean the final result of the code?
I see you are using X as a control variable from the pyadjoint package. Usually I would not interpolate it; just set X = Constant(0.82) without any interpolation.

In your problem, do you want X to be a dolfin.Function, which initially is a constant (X0), but can vary across the domain?

Or do you want X to be decomposed into X = X0 + Xp where Xp can vary across the domain?

Hi Dokken, I want X to be a dolfin.Function that is initially a constant (X0) but can vary across the domain. Thank you!

Your problem is that you are hitting various local extrema, depending on your initial condition. I added some print statements to your code to make it clear that the initial guess is different for each case.

print(f"Initial J: {rf(X)}")
print(f"min opt {min(f_opt.vector().get_local())}, max opt {max(f_opt.vector().get_local())}")
print(f"Optimal J {rf(f_opt)}")

If you set:

X0 = 0.83
X_expr = Expression("X0", degree=1, X0=X0)
X = interpolate(X_expr, W)

you get:

iteration = 3:  objective = 1.3256134903310807e-08:     grad_norm = 0.0002303235975668256:       delta_J = 3.998395394672504e-05:
Initial J: 0.019909210000000132
min opt 1.0001151220364821, max opt 1.0001151220364821
Optimal J 1.3256134903310807e-08

while if you set:

X_expr = Expression("x[0]+x[1]", degree=1)
X = interpolate(X_expr, W)

you get to

iteration = 10: objective = 0.0004884059343981214:      grad_norm = 0.0007101492081490739:       delta_J = 1.7945464087309277e-05:
Initial J: 0.23325195908546448
min opt -0.0028063008248741944, max opt 1.0028063008248742
Optimal J 0.0004884059343981214
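The effect of the initial guess can be reproduced outside of FEniCS. Below is a minimal NumPy sketch (an assumed stand-in, not part of the original code) of gradient descent on the scalar double-well polynomial x**2 * (1 - x)**2, which is the pointwise integrand of J with eps = 1: any start above the barrier at x = 0.5 rolls into the well at x = 1, and any start below it rolls into the well at x = 0.

```python
import numpy as np

def f(x, eps=1.0):
    # The pointwise integrand of J: a double well with minima at 0 and 1
    # and a local maximum (barrier) at x = 0.5.
    return (1.0 / eps) * x**2 * (1.0 - x)**2

def grad_f(x, eps=1.0):
    # d/dx [x^2 (1 - x)^2] = 2 x (1 - x) (1 - 2 x)
    return (1.0 / eps) * 2.0 * x * (1.0 - x) * (1.0 - 2.0 * x)

def descend(x0, lr=0.1, steps=2000):
    # Plain gradient descent; the well it lands in depends only on
    # which side of the barrier the start lies.
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

print(descend(0.83))  # a start above 0.5 converges toward 1
print(descend(0.17))  # a start below 0.5 converges toward 0
```

This matches the two runs above: the constant initial guess 0.83 sits above the barrier, so every dof ends near 1, which is why the reported optimum is roughly 1.0001 everywhere.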

Thank you, dokken. When I use a constant initial guess, why do I get the same min and max opt, but different values when I use spatial coordinates? I want to get different optima for min and max.

As your optimization problem is independent at each degree of freedom, you are simply minimizing the polynomial at each dof. This means that the gradient will be the same at two dofs if they start with the same value.

If you start with different values at different dofs, you end up with distinct gradients.
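This per-dof decoupling can be sketched with plain NumPy (again an assumed stand-in, not the actual dolfin-adjoint machinery): running the same descent on a vector of dofs shows that identical starting values produce identical gradients and identical end points, while varied starting values split between the two wells.

```python
import numpy as np

def grad(x, eps=1.0):
    # Gradient of the decoupled objective sum_i (1/eps) x_i^2 (1 - x_i)^2;
    # each component depends only on its own dof.
    return (1.0 / eps) * 2.0 * x * (1.0 - x) * (1.0 - 2.0 * x)

def descend(x, lr=0.1, steps=2000):
    x = x.copy()
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Identical starts (like the constant initial guess) stay identical.
uniform = descend(np.full(5, 0.83))
print(uniform)  # every entry the same, near 1

# Distinct starts (like x[0] + x[1]) split between the wells at 0 and 1.
varied = descend(np.linspace(0.1, 0.9, 4))
print(varied)  # min near 0, max near 1
```

This is why the constant initial guess reports identical min and max opt values, while the spatially varying one reports a min near 0 and a max near 1.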
