Dolfin-adjoint: ABNORMAL_TERMINATION_IN_LNSRCH

Hello everyone,

I am using Dolfin-adjoint for a minimization problem, and I am getting the following error:
ABNORMAL_TERMINATION_IN_LNSRCH

 Line search cannot locate an adequate point after 20 function
  and gradient evaluations.  Previous x, f and g restored.
 Possible causes: 1 error in function or gradient evaluation;
                  2 rounding error dominate computation.

However, when I try to increase maxls to a higher number (e.g., 40), the code still stops and says
Line search cannot locate an adequate point after 20 function and gradient evaluations.
Can anyone please help me with this issue?

I am using the L-BFGS-B method with the moola library, but I can change the method or library if that is what is causing the problem.
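Since the message lists an error in the function or gradient evaluation as a possible cause, I assume the gradient could at least be sanity-checked with dolfin-adjoint's taylor_test. A minimal sketch, assuming J_hat is my reduced functional, u the control function, and V its function space:

h = Function(V)                  # perturbation direction for the Taylor test
h.vector()[:] = 0.1
rate = taylor_test(J_hat, u, h)  # convergence rate should be close to 2 for a correct gradient
print(rate)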

Thank you so much!

You should create a minimal working example reproducing the behavior and termination you refer to. Please try to make the example as minimal as possible.


Thank you so much for the reply.
Unfortunately, I couldn't create a minimal example that needs more than 20 iterations in each line search. But please consider the example below (the same as the code in Using Dolfin-adjoint for optimization problem without PDE constraint - #2 by dokken), where I have changed the options in this line: u_opt = minimize(J_hat, method = "L-BFGS-B", options = {"gtol": 1e-6, "ftol": 1e-16, "maxfun": 30000, "maxiter": 30000, "maxls": 40}).
The problem is that in my real code, even though I am using these options, the optimizer still stops after trying 20 line-search iterations.

import numpy as np
import matplotlib.pyplot as plt
from dolfin import *
from dolfin_adjoint import *
import moola

n = 10
mesh = RectangleMesh(Point(-1,-1),Point(1,1), n, n)

V = FunctionSpace(mesh, "CG", 1)
u = Function(V)
v = TestFunction(V)
S0 = Constant(1)

bc = DirichletBC(V, 1, "on_boundary")

v = project(u, V, bcs=bc)
J = assemble((0.5*inner(grad(v), grad(v)) - v*S0)*dx)
J_hat = ReducedFunctional(J, Control(u))   
u_opt = minimize(J_hat, method = "L-BFGS-B", options = {"gtol": 1e-6, "ftol": 1e-16, "maxfun": 30000, "maxiter": 30000, "maxls": 40})
J_hat(u_opt)
fileY = File("temp.pvd")
fileY << v.block_variable.saved_output
print(assemble(v*ds))

The problem is that the code you are showing works for me.
I cannot help you debug code without something that reproduces the error message.


Thanks, yes, that definitely makes sense. Let me try again and see if I can create a good minimal working example.


Hello Mr. Dokken,

I have attached a minimal code that gives me the line-search error. In the code below, I am trying to minimize the functional (1 - |d|)**2 * W(y) with respect to two variables (y, d).

from dolfin import *
from dolfin_adjoint import *
import moola

mesh = RectangleMesh(Point(-1,-1),Point(1,1), 20, 20)
V = VectorFunctionSpace(mesh, "CG", 1)
dy, dd = TrialFunction(V), TrialFunction(V)            
vy, vd  = TestFunction(V), TestFunction(V)             
y, d  = Function(V), Function(V) 

def bottom(x, on_boundary):
    return x[1] < -1 + DOLFIN_EPS and on_boundary
def top(x, on_boundary):
    return x[1] > 1 - DOLFIN_EPS and on_boundary

load1 = Expression(("x[0]","x[1] - t"), t=0.05, degree=1) 
load2 = Expression(("x[0]","x[1] + t"), t=0.05, degree=1)
bcs1_y = DirichletBC(V, load1, bottom)
bcs2_y = DirichletBC(V, load2, top)  
bcs_y = [bcs1_y, bcs2_y]
y_assign = Expression(("x[0]", "x[1]"), degree=1)
y.assign(project(y_assign, V)) #initial guess

bcs1_d = DirichletBC(V, Constant((0.0, 0.0)), bottom)
bcs2_d = DirichletBC(V, Constant((1, 1)), top) 
bcs_d = [bcs1_d, bcs2_d]
d_assign = Constant((0.5, 0.5))
d.assign(project(d_assign, V)) #initial guess

def W(y):
    F = grad(y)                               
    return 0.5*tr(F.T*F) + (det(F)-1)**2 -ln(det(F))   # strain energy density (compressible Mooney-Rivlin)
   
vy, vd = project(y, V, bcs=bcs_y), project(d, V, bcs=bcs_d)
J = assemble( (1-dot(vd,vd)**0.5)**2*W(vy)*dx )    
m1, m2 = Control(y), Control(d)
J_hat = ReducedFunctional(J, [m1, m2])   
m_opt = minimize(J_hat, method = "L-BFGS-B",\
        options = {"gtol": 1e-6, "ftol": 1e-16, "maxfun": 30000, "maxiter": 30000, "maxls": 200}) 
J_hat(m_opt)
y_opt, d_opt = m_opt[0], m_opt[1]

Thank you so much!

Hello Mr. @dokken
I wanted to follow up on my problem. Also, the versions of the libraries I am using are as follows:
fenics-dolfin==2019.2.0.dev0, dolfin-adjoint==2019.1.0, moola==0.1.6, and scipy==1.3.3

Hello Mr. @dokken
Since I am still struggling with this problem, I would be very thankful if you could help me with it.

Thank you so much,

I don't know if you are still struggling with this, but the reason your line search fails is that your functional returns NaN when the determinant of F becomes negative.
The gradient of J will also contain NaNs when this occurs.

You can get around it by forcing the optimizer back into a valid range for det(F).
Here is one way to do that: catch the NaN values and return only a penalty term on det(F).

import ufl
import math

vy, vd = project(y, V, bcs=bcs_y), project(d, V, bcs=bcs_d)

# Used in case of negative determinant
functional_toggle = AdjFloat(1.0)
reg_term = -1e+6 * assemble(ufl.Min(0, det(grad(vy)))*dx)
# We need to fetch these later
ft_control = Control(functional_toggle)
reg_control = Control(reg_term)

# Compute functional with the regularization term
J = functional_toggle * assemble( (1-dot(vd,vd)**0.5)**2*W(vy)*dx ) + reg_term

# Define a modified reduced functional class that contains the workaround
class ModifiedReducedFunctional(ReducedFunctional):
    def __init__(self, *args, **kwargs):
        self.functional_toggle = kwargs.pop("functional_toggle")
        self.reg_term = kwargs.pop("reg_term")
        super().__init__(*args, **kwargs)

    def __call__(self, *args, **kwargs):
        # New evaluation, so reset functional_toggle to 1.0:
        self.functional_toggle.update(1.0)

        func_value = super().__call__(*args, **kwargs)

        if math.isnan(func_value):
            # Toggle off functional for potential gradient calculations
            self.functional_toggle.update(0.)
            # Return only the regularization term
            func_value = self.reg_term.tape_value()

            # Print warning so we know if optimizer is actually working inside the valid range.
            print("Warning: functional value is NaN.")

        return func_value

m1, m2 = Control(y), Control(d)
J_hat = ModifiedReducedFunctional(J, [m1, m2], functional_toggle=ft_control, reg_term=reg_control)
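
This snippet is meant to replace the definitions of vy, vd, J, J_hat, m1 and m2 in your script, with the rest unchanged. As a sketch (reusing the same options you already had), the modified functional can then be passed to minimize as before:

m_opt = minimize(J_hat, method = "L-BFGS-B",
        options = {"gtol": 1e-6, "ftol": 1e-16, "maxfun": 30000, "maxiter": 30000, "maxls": 200})
y_opt, d_opt = m_opt[0], m_opt[1]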