Unstable (irreproducible) optimization result

Hello all,

I’m solving a time-distributed control problem for the heat equation, with a special term in the functional that penalizes violation of a state constraint. For instance, for the state constraint u-u_{max} \geqslant 0 this term is \gamma(u-u_{max} + \sqrt{(u-u_{max})^2 + \theta})^2 . In the deterministic case everything works fine. But I consider the stochastic case, where some parameters of the PDE are random variables \xi, so to impose the state constraint I have to consider chance constraints (CC) of the form Pr\{u(\xi)-u_{max} \geqslant 0\} \geqslant \alpha. These CC I can approximate with the following function:
\Theta(\tau, \xi) = \frac{1+m_1\tau}{1+m_2\tau e^{\frac{1}{\tau}(u(\xi)-u_{max})}} , so that E[\Theta(\tau, \xi)] \leqslant 1 - \alpha . It all comes together in the term I add to the functional: \gamma(\Theta(\tau) -(1 - \alpha) + \sqrt{(\Theta(\tau) - (1 - \alpha))^2 + \theta})^2 (assuming I take only one sample from the Monte Carlo simulation). I won’t go into further details; I’ll just note that I have to solve this problem multiple times, each time decreasing \tau: \tau_{i+1} = \tau_i/10.
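As a quick numerical illustration (a plain-Python sketch, using the values \gamma = 100 and \theta = 0.0016 from the MWE below): the penalty term is a smooth approximation of \gamma(2\max(g, 0))^2 for the constraint residual g, so it is nearly zero on feasible points and quadratic on violations.

```python
import math

def penalty(g, gamma=100.0, theta=0.0016):
    # Smooth approximation of gamma*(2*max(g, 0))**2 as theta -> 0
    return gamma * (g + math.sqrt(g**2 + theta)) ** 2

print(penalty(-1.0))  # feasible point: ~6.4e-05, essentially no penalty
print(penalty(1.0))   # violation by 1: ~400.3, close to gamma*(2*1)**2
```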

I have the following issue:

The expression e^{\frac{1}{\tau}(u-u_{max})} takes values too large for the Python float type for certain \tau. If I just try to calculate this expression for \tau \geqslant 0.01 everything is fine; for \tau \leqslant 0.001 I get an OverflowError, because if u-u_{max} = 1 the term is e^{1000}, which cannot be represented. The optimization shows the same pattern: for \tau \geqslant 0.01 everything is fine, but for \tau \leqslant 0.001 the projected gradient is zero at the first iteration and the optimization doesn’t go any further.
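For what it’s worth, the fraction in \Theta can be evaluated without overflow by working in log space, since 1/(1 + a e^x) = \exp(-\mathrm{logaddexp}(0, \ln a + x)). A NumPy sketch of this (for plain-float evaluation only; as far as I understand, inside a UFL form the exponential is evaluated in generated code, so the rewriting would have to be done at the form level instead):

```python
import numpy as np

def theta_stable(u, u_max=2.05, tau=1e-3, m_1=1.0, m_2=0.4):
    # (1 + m_1*tau) / (1 + m_2*tau*exp((u - u_max)/tau)), evaluated via
    # 1/(1 + a*e^x) = exp(-logaddexp(0, log(a) + x)) to avoid overflow
    x = (u - u_max) / tau
    log_denom = np.logaddexp(0.0, np.log(m_2 * tau) + x)
    return (1.0 + m_1 * tau) * np.exp(-log_denom)

print(theta_stable(3.05))  # u - u_max = 1, tau = 1e-3: no overflow, ~0.0
print(theta_stable(1.05))  # well below u_max: ~1.001
```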

I tried rescaling the functional to increase the gradient, I tried different L-BFGS-B tolerances (ftol, gtol) and step sizes (eps), and I also tried the TNC algorithm, but the projected gradient is always zero at the first iteration for \tau \leqslant 0.001.

I see two possible solutions:

  1. catching overflow exceptions and setting \Theta to zero in those cases (because if overflow occurs, e^{\frac{1}{\tau}(u-u_{max})} \approx \infty \Rightarrow \Theta(\tau) = 0)
  2. using the decimal data type
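For option 1, a plain-float sketch of the idea (theta_capped is a hypothetical helper, not part of the FEniCS code; note that math.exp raises OverflowError only for plain Python floats, while the exponential inside a UFL form is evaluated in compiled code):

```python
import math

def theta_capped(u, u_max=2.05, tau=1e-3, m_1=1.0, m_2=0.4):
    try:
        denom = 1.0 + m_2 * tau * math.exp((u - u_max) / tau)
    except OverflowError:
        # exp((u - u_max)/tau) ~ inf  =>  Theta ~ 0
        return 0.0
    return (1.0 + m_1 * tau) / denom

print(theta_capped(3.05))  # exp(1000) overflows -> 0.0
print(theta_capped(1.05))  # ~1.001
```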

But I don’t get any exceptions during the optimization, and I can’t get decimals to work with FEniCS.

So I made a minimal working example to ask about this issue. I just added control bounds and the additional functional term to the time-distributed control of the heat equation demo, and suddenly it worked for \tau \leqslant 0.001 (the projected gradient is not zero at the first iteration). I adjusted the optimization parameters a bit and got to the right solution (near the desired value, with CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL).

But when I just run the code again, changing literally nothing, I don’t get the same result (the algorithm converges too early with CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH). I’ve reached the good result multiple times by making small changes to the optimization parameters, but I could never reproduce it; the optimization routine stops too early.
I have three questions:

  1. What could make my minimal working example so unstable? Usually I see convergence to the same solution when running the code multiple times.
  2. What could be the reason for the projected gradient being zero at the first iteration in my working code (which I can also provide if needed) for \tau \leqslant 0.001?
  3. And how can one overcome these issues?
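Regarding question 2, a plain-Python sketch of the derivative of \Theta with respect to u (my own illustration, not part of the FEniCS code) suggests one mechanism: for small \tau the derivative is numerically zero except in a thin layer around u_{max}, so the penalty contributes nothing to the gradient almost everywhere.

```python
import math

def dtheta_du(u, u_max=2.05, tau=1e-3, m_1=1.0, m_2=0.4):
    # d/du of (1 + m_1*tau)/(1 + m_2*tau*exp((u - u_max)/tau));
    # the 1/tau from the chain rule cancels the tau in m_2*tau
    x = (u - u_max) / tau
    if x > 700:  # exp would overflow; the limit of the derivative is 0
        return 0.0
    E = math.exp(x)
    return -(1.0 + m_1 * tau) * m_2 * E / (1.0 + m_2 * tau * E) ** 2

print(dtheta_du(2.05))  # at u_max: about -0.4
print(dtheta_du(2.00))  # 0.05 below u_max: ~ -7.7e-23, numerically zero
print(dtheta_du(2.10))  # 0.05 above u_max: ~ -4.8e-16, numerically zero
```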

MWE:

from fenics import *
from fenics_adjoint import *
from collections import OrderedDict
import numpy as np
import itertools
set_log_active(False)

nu = Constant(1e-5)
data = Constant(2)
alpha = Constant(1e-3)
c_max = 30
c_min = 0

nx = ny = 20
mesh = UnitSquareMesh(nx, ny)
V = FunctionSpace(mesh, "CG", 1)
dt = Constant(0.1)
T = 2

e = 2.71828

# Penalization parameters
u_max = 2.05
gamma = 100
tetha = 0.0016

# Chance constraint approximation parameters
m_1 = 1.0
m_2 = 0.4
tau = 1e-02
alpha = 0.9

ctrls = OrderedDict()
t = float(dt)
while t <= T:
    ctrls[t] = Function(V)
    t += float(dt)

def solve_heat(ctrls):
    u = TrialFunction(V)
    v = TestFunction(V)

    f = Function(V, name="source")
    u_0 = Function(V, name="solution")
    d = Function(V, name="data")

    F = ( (u - u_0)/dt*v + nu*inner(grad(u), grad(v)) - f*v)*dx
    a, L = lhs(F), rhs(F)
    bc = DirichletBC(V, 0, "on_boundary")

    t = float(dt)
    
    Theta = - 1 + alpha + (1+m_1*tau)/(1+m_2*tau*e**(1/tau*(-u_0 + u_max)))   
    j = (0.5*float(dt)*assemble((u_0 - d)**2*dx) 
         + gamma*0.125*float(dt)*assemble((Theta + ((Theta)**2 + tetha)**0.5)**2*dx(1)))

    while t <= T:
        f.assign(ctrls[t])
        data.t = t
        d.assign(interpolate(data, V))
        solve(a == L, u_0, bc)

        # Implement a trapezoidal rule 
        if t > T - float(dt):
           weight = 0.5
        else:
           weight = 1
        
        Theta_1 = - 1 + alpha + (1+m_1*tau)/(1+m_2*tau*e**(1/tau*(-u_0 + u_max)))
        j += (weight*float(dt)*assemble((u_0 - d)**2*dx) 
              + gamma*0.125*float(dt)*assemble((Theta_1 + ((Theta_1)**2 + tetha)**0.5)**2*dx(1)))
        t += float(dt)

    return u_0, d, j

u, d, j = solve_heat(ctrls)

alpha = Constant(1e-3)
regularisation = alpha/2*sum([1/dt*(fb-fa)**2*dx for fb, fa in
    zip(list(ctrls.values())[1:], list(ctrls.values())[:-1])])

J = j + assemble(regularisation)
m = [Control(c) for c in ctrls.values()]

lower = [interpolate(Constant(c_min), V) for i in m]
upper = [interpolate(Constant(c_max), V) for i in m]


rf = ReducedFunctional(J, m)
opt_ctrls = minimize(rf, method = "L-BFGS-B", options={"maxiter": 50, 
                                                       'ftol': 2e-08, 
                                                       'eps': 1e-02}, bounds = (lower,upper))

Good optimization results in the MWE usually have a functional value of about 0.66 and look like the attached plot at each time step.

Thank you in advance for your time.

Running this with the latest dolfin-adjoint docker image I get the following results:

RUNNING THE L-BFGS-B CODE

           * * *

Machine precision = 2.220D-16
 N =         8379     M =           10

At X0      8379 variables are exactly at the bounds

At iterate    0    f=  7.40000D+00    |proj g|=  1.85655D-03

At iterate    1    f=  7.39169D+00    |proj g|=  1.85541D-03

At iterate    2    f=  1.31389D+00    |proj g|=  4.01674D-04

At iterate    3    f=  1.20286D+00    |proj g|=  2.88730D-04

At iterate    4    f=  7.66881D-01    |proj g|=  3.65013D-04

At iterate    5    f=  7.16631D-01    |proj g|=  2.32975D-04

At iterate    6    f=  6.93558D-01    |proj g|=  1.29216D-04

At iterate    7    f=  6.71833D-01    |proj g|=  8.17512D-05

At iterate    8    f=  6.63748D-01    |proj g|=  4.60229D-05

At iterate    9    f=  6.61681D-01    |proj g|=  2.47224D-05

At iterate   10    f=  6.61480D-01    |proj g|=  2.15650D-05

At iterate   11    f=  6.61231D-01    |proj g|=  1.19549D-05

At iterate   12    f=  6.61188D-01    |proj g|=  1.13170D-05

At iterate   13    f=  6.61085D-01    |proj g|=  1.13835D-05

At iterate   14    f=  6.61019D-01    |proj g|=  9.18566D-06

           * * *

Tit   = total number of iterations
Tnf   = total number of function evaluations
Tnint = total number of segments explored during Cauchy searches
Skip  = number of BFGS updates skipped
Nact  = number of active bounds at final generalized Cauchy point
Projg = norm of the final projected gradient
F     = final function value

           * * *

   N    Tit     Tnf  Tnint  Skip  Nact     Projg        F
 8379     14     16   6527     0  6144   9.186D-06   6.610D-01
  F =  0.66101874628316715     

CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL    

Are these the same results you obtain?


I got this result once while writing this post, and it’s basically the good one. But now, after restarting the system, creating a new file and copying the code from my post, I can’t obtain it any more.
This is what I get on the Linux subsystem for Windows with the conda build of FEniCS:

Machine precision = 2.220D-16
 N =         8379     M =           10

At X0      8379 variables are exactly at the bounds

At iterate    0    f=  7.40000D+00    |proj g|=  1.85655D-03

At iterate    1    f=  7.36683D+00    |proj g|=  1.85197D-03
  Positive dir derivative in projection
  Using the backtracking step

At iterate    2    f=  1.47276D+00    |proj g|=  6.73053D-04
  Positive dir derivative in projection
  Using the backtracking step

At iterate    3    f=  1.47246D+00    |proj g|=  6.75444D-04
  Positive dir derivative in projection
  Using the backtracking step

At iterate    4    f=  1.38970D+00    |proj g|=  4.91495D-04
  Positive dir derivative in projection
  Using the backtracking step

At iterate    5    f=  1.37063D+00    |proj g|=  4.43336D-04
  Positive dir derivative in projection
  Using the backtracking step

At iterate    6    f=  1.29714D+00    |proj g|=  3.98291D-04
  Positive dir derivative in projection
  Using the backtracking step

At iterate    7    f=  1.28975D+00    |proj g|=  3.49910D-04
  Positive dir derivative in projection
  Using the backtracking step

At iterate    8    f=  1.17127D+00    |proj g|=  4.17008D-04
  Positive dir derivative in projection
  Using the backtracking step

At iterate    9    f=  1.19664D+00    |proj g|=  7.96797D-04

           * * *

Tit   = total number of iterations
Tnf   = total number of function evaluations
Tnint = total number of segments explored during Cauchy searches
Skip  = number of BFGS updates skipped
Nact  = number of active bounds at final generalized Cauchy point
Projg = norm of the final projected gradient
F     = final function value

           * * *

   N    Tit     Tnf  Tnint  Skip  Nact     Projg        F
 8379      9     15     10     0     6   7.968D-04   1.197D+00
  F =   1.1966423159129533

CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH

Do you have the same result rerunning the code?

I had no issues when re-running the code.


OK, I see. Then I’ll try to update FEniCS or to use the Docker image.

Thank you very much for your help!

Try using quay.io/dolfinadjoint/pyadjoint:latest


Do you mean I should pull the whole fenicsproject script with

  1. curl -s https://get.fenicsproject.org | bash

and then run a container of the latest build?

  1. fenicsproject run quay.io/dolfinadjoint/pyadjoint:latest

Doing so, on the first step I got the following in the command line:

It appears that Docker is not installed on your system.

Though I’ve installed Docker Desktop and it works fine:

REPOSITORY                        TAG                 IMAGE ID            CREATED             SIZE
hello-world                       latest              bf756fb1ae65        7 months ago        13.3kB

In Powershell I got:

cmdlet Invoke-WebRequest at command pipeline position 1
Supply values for the following parameters:
Uri:

After that I tried to run a Jupyter notebook container manually with:

docker run --name notebook -w /home/fenics -v C:\Users\ruvo0863\Desktop\Dockerbiuldoffenics:/home/fenics/shared -d -p 127.0.0.1:8890:8890 quay.io/dolfinadjoint/pyadjoint:latest 'jupyter-notebook --ip=0.0.0.0'

Here I changed the port, because 8888 and 8889 (used by the local Jupyter) are already occupied.

I got a token:

996f4c6d237dc686e2b5e9d187b5a694f94c2197d9b2f52c90d72401c080703b

But I couldn’t open Jupyter in the browser via

http://localhost:8890/?token=996f4c6d237dc686e2b5e9d187b5a694f94c2197d9b2f52c90d72401c080703b

because I got “This site can’t be reached: localhost refused to connect”.

As I understood from https://fenics-containers.readthedocs.io/en/latest/jupyter.html, I should be able to see this container running, but if I execute docker ps I get nothing, and with docker ps -a I can see that the container has stopped:

CONTAINER ID        IMAGE                                    COMMAND                  CREATED             STATUS                      PORTS               NAMES
996f4c6d237d        quay.io/dolfinadjoint/pyadjoint:latest   "/sbin/my_init --qui…"   33 seconds ago      Exited (1) 16 seconds ago                       notebook

Also, when I execute docker logs 996f4c6d237d I get:

--ip=0.0.0.0': -c: line 0: unexpected EOF while looking for matching `''
--ip=0.0.0.0': -c: line 1: syntax error: unexpected end of file

Do you have any ideas what the problem is?

Could you try running the following in powershell:

docker run -ti -v  ${PWD}:/home/fenics/shared -w /home/fenics/shared/ --rm quay.io/dolfinadjoint/pyadjoint:latest

This should spawn a general docker container (terminal only). Adding additional flags for jupyter should be possible, see for instance: https://stackoverflow.com/questions/43884981/unable-to-connect-localhost-in-docker .


Thank you for the response!

I tried this command before, and it works; I get a FEniCS terminal and can execute a file located in PWD. But I can’t run a Jupyter notebook from there:

fenics@8f05d2fcb68d:~/shared$ jupyter notebook
[I 14:27:25.649 NotebookApp] Writing notebook server cookie secret to /home/fenics/.local/share/jupyter/runtime/notebook_cookie_secret
Traceback (most recent call last):
  File "/usr/local/bin/jupyter-notebook", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.6/dist-packages/jupyter_core/application.py", line 268, in launch_instance
    return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/traitlets/config/application.py", line 663, in launch_instance
    app.initialize(argv)
  File "<decorator-gen-7>", line 2, in initialize
  File "/usr/local/lib/python3.6/dist-packages/traitlets/config/application.py", line 87, in catch_config_error
    return method(app, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/notebook/notebookapp.py", line 1720, in initialize
    self.init_webapp()
  File "/usr/local/lib/python3.6/dist-packages/notebook/notebookapp.py", line 1482, in init_webapp
    self.http_server.listen(port, self.ip)
  File "/usr/local/lib/python3.6/dist-packages/tornado/tcpserver.py", line 151, in listen
    sockets = bind_sockets(port, address=address)
  File "/usr/local/lib/python3.6/dist-packages/tornado/netutil.py", line 174, in bind_sockets
    sock.bind(sockaddr)
OSError: [Errno 99] Cannot assign requested address

What do you mean by:

Adding additional flags for jupyter should be possible

Do you mean adding flags to the docker run -ti -v ... command? I haven’t found any reference to that under the link you attached. Could you give an example?

But from that link I understood that the problem is in the container, which can’t be started properly, judging by the messages from the docker logs, docker ps and docker network inspect commands. I can also reproduce the error message from the log if I don’t use the -d flag to detach the container in

docker run --name notebook -w /home/fenics -v C:\Users\ruvo0863\Desktop\Dockerbiuldoffenics:/home/fenics/shared -d -p 127.0.0.1:8890:8890 quay.io/dolfinadjoint/pyadjoint:latest 'jupyter-notebook --ip=0.0.0.0'

--ip=0.0.0.0': -c: line 0: unexpected EOF while looking for matching `''
--ip=0.0.0.0': -c: line 1: syntax error: unexpected end of file

Therefore I can’t have problems with ports, because the container is not even running.

Using the following commands, I can run the container on port 8890 (on Linux):

docker run --name notebook8890 -w /home/fenics/shared -v $(pwd):/home/fenics/shared -d -p 127.0.0.1:8890:8890 quay.io/dolfinadjoint/pyadjoint:latest 'jupyter-notebook --ip=0.0.0.0'
jupyter notebook --port 8890

Note: if I do not detach the notebook, I cannot open the Jupyter notebook with the link provided in the terminal.


Thank you very much for your attempts to help me.

I’m running everything on Windows.

The first command gets the container running:

docker ps
CONTAINER ID        IMAGE                                    COMMAND                  CREATED             STATUS              PORTS                      NAMES
0be4427e7833        quay.io/dolfinadjoint/pyadjoint:latest   "/sbin/my_init --qui…"   6 seconds ago       Up 6 seconds        127.0.0.1:8890->8890/tcp   notebook8890

But the second command gives:

 jupyter notebook --port 8890
[I 15:23:12.717 NotebookApp] The port 8890 is already in use, trying another port.
[I 15:23:12.721 NotebookApp] Serving notebooks from local directory: C:\Users\ruvo0863
[I 15:23:12.721 NotebookApp] Jupyter Notebook 6.1.1 is running at:
[I 15:23:12.721 NotebookApp] http://localhost:8891/?token=f36fbf911db2df16da677b44d859fe96cd83b8eda018629e
[I 15:23:12.722 NotebookApp]  or http://127.0.0.1:8891/?token=f36fbf911db2df16da677b44d859fe96cd83b8eda018629e
[I 15:23:12.722 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 15:23:13.012 NotebookApp]

    To access the notebook, open this file in a browser:
        file:///C:/Users/ruvo0863/AppData/Roaming/jupyter/runtime/nbserver-1796-open.html
    Or copy and paste one of these URLs:
        http://localhost:8891/?token=f36fbf911db2df16da677b44d859fe96cd83b8eda018629e
     or http://127.0.0.1:8891/?token=f36fbf911db2df16da677b44d859fe96cd83b8eda018629e

This runs the locally installed Jupyter (not the one from the container), so I can’t import fenics.

From the log I can see the following:

docker logs 0be4427e7833
[I 12:52:03.864 NotebookApp] Writing notebook server cookie secret to /home/fenics/.local/share/jupyter/runtime/notebook_cookie_secret
[I 12:52:04.153 NotebookApp] JupyterLab extension loaded from /usr/local/lib/python3.6/dist-packages/jupyterlab
[I 12:52:04.153 NotebookApp] JupyterLab application directory is /usr/local/share/jupyter/lab
[I 12:52:04.157 NotebookApp] Serving notebooks from local directory: /home/fenics/shared
[I 12:52:04.157 NotebookApp] The Jupyter Notebook is running at:
[I 12:52:04.158 NotebookApp] http://0be4427e7833:8888/?token=e584fb56e8da0692d9294ffa8bf915a8db1e0a774a934fec
[I 12:52:04.158 NotebookApp]  or http://127.0.0.1:8888/?token=e584fb56e8da0692d9294ffa8bf915a8db1e0a774a934fec
[I 12:52:04.158 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[W 12:52:04.164 NotebookApp] No web browser found: could not locate runnable browser.
[C 12:52:04.164 NotebookApp]

    To access the notebook, open this file in a browser:
        file:///home/fenics/.local/share/jupyter/runtime/nbserver-14-open.html
    Or copy and paste one of these URLs:
        http://0be4427e7833:8888/?token=e584fb56e8da0692d9294ffa8bf915a8db1e0a774a934fec
     or http://127.0.0.1:8888/?token=e584fb56e8da0692d9294ffa8bf915a8db1e0a774a934fec

But neither of these links works, including with port 8890.

I followed the instructions from the Stack Overflow post you attached and tried to use the IPv4 address from docker network inspect:

 "Containers": {
            "0be4427e7833158c5d47d6166ff19406886260974bdca62d46a6805c581673e4": {
                "Name": "notebook8890",
                "EndpointID": "6c315092a64de43f0a59bcbe68298ddf4d846cdac2a43914efaa792da2c308b9",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }

So I tried:
http://172.17.0.2:8888/?token=e584fb56e8da0692d9294ffa8bf915a8db1e0a774a934fec
and
http://172.17.0.2:8890/?token=e584fb56e8da0692d9294ffa8bf915a8db1e0a774a934fec
Nothing works.

Then I tried everything again on another PC, also on Windows, and nothing worked.

If you do not detach your container, and run the following command:

docker run --name notebook8890 -p=8890:8888 -w /home/fenics/shared -v $(pwd):/home/fenics/shared quay.io/dolfinadjoint/pyadjoint:latest 'jupyter-notebook --port 8888 --ip=0.0.0.0'

and go to:

http://127.0.0.1:8890/tree

what do you get?


It’s working! Also with a detached container. Setting the FEniCS container to port 8888 manually, and then binding this port to host port 8890, did the trick. Thank you so much!

But how can I now get the convergence message? After the optimization is done, I can’t find it in the log. Executing docker logs notebook8890 I only get this:

[I 07:55:02.009 NotebookApp] Writing notebook server cookie secret to /home/fenics/.local/share/jupyter/runtime/notebook_cookie_secret
[I 07:55:02.455 NotebookApp] JupyterLab extension loaded from /usr/local/lib/python3.6/dist-packages/jupyterlab
[I 07:55:02.455 NotebookApp] JupyterLab application directory is /usr/local/share/jupyter/lab
[I 07:55:02.459 NotebookApp] Serving notebooks from local directory: /home/fenics/shared
[I 07:55:02.459 NotebookApp] The Jupyter Notebook is running at:
[I 07:55:02.459 NotebookApp] http://baedfb787bed:8888/?token=b274279de845f0dd14440cc514a65a4904b0355e9a9f1ad3
[I 07:55:02.459 NotebookApp]  or http://127.0.0.1:8888/?token=b274279de845f0dd14440cc514a65a4904b0355e9a9f1ad3
[I 07:55:02.459 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[W 07:55:02.464 NotebookApp] No web browser found: could not locate runnable browser.
[C 07:55:02.464 NotebookApp]

    To access the notebook, open this file in a browser:
        file:///home/fenics/.local/share/jupyter/runtime/nbserver-14-open.html
    Or copy and paste one of these URLs:
        http://baedfb787bed:8888/?token=b274279de845f0dd14440cc514a65a4904b0355e9a9f1ad3
     or http://127.0.0.1:8888/?token=b274279de845f0dd14440cc514a65a4904b0355e9a9f1ad3
[W 07:55:59.309 NotebookApp] Clearing invalid/expired login cookie username-127-0-0-1-8890
[W 07:55:59.310 NotebookApp] Clearing invalid/expired login cookie username-127-0-0-1-8890
[I 07:55:59.311 NotebookApp] 302 GET /tree (172.17.0.1) 2.51ms
[I 07:56:10.409 NotebookApp] 302 POST /login?next=%2Ftree (172.17.0.1) 1.19ms
[I 07:56:15.216 NotebookApp] Writing notebook-signing key to /home/fenics/.local/share/jupyter/notebook_secret
[W 07:56:15.217 NotebookApp] Notebook Untitled.ipynb is not trusted
[I 07:56:15.847 NotebookApp] Kernel started: 429997b3-7b97-4e3c-9f36-02fd9afb351b
[I 07:58:15.792 NotebookApp] Saving file at /Untitled.ipynb
[I 08:00:04.415 NotebookApp] 302 GET /?token=b274279de845f0dd14440cc514a65a4904b0355e9a9f1ad3 (172.17.0.1) 1.34ms
[I 08:12:15.771 NotebookApp] Saving file at /Untitled.ipynb
[I 08:14:15.773 NotebookApp] Saving file at /Untitled.ipynb
[I 08:16:15.787 NotebookApp] Saving file at /Untitled.ipynb

I somehow got a log with the convergence message a few times with docker logs notebook8890, by opening other notebooks and running code in the same container. But it doesn’t work when I run a new container. I also don’t see the logic in the actions I took to make the log work, and now I can’t make it work again.

Do you have any ideas how to fix this?

This is a Jupyter notebook issue, which I do not know how to fix. One thing that I’ve seen is that when you clear output and restart the kernel of a notebook, it flushes the output to the logs.


I see. Appreciate your help!

Interestingly, the only thing that doesn’t get into the log is the output of the L-BFGS-B solver; everything else gets into the log normally. I tried different solver options, which didn’t help either. I have also found that this issue is reproducible on other PCs, so the following might help someone.

I haven’t been able to resolve it, but you were right about restarting the kernel: it returns the missing output. I came up with saving the optimization result (opt_ctrls) to an HDF5 file before restarting the kernel and reading it back afterwards:

# Save the optimized controls to a file
OptCtrlsFile = HDF5File(MPI.comm_world, "opt_ctrls.h5", "w")

for k, ctrl in enumerate(opt_ctrls):
    OptCtrlsFile.write(ctrl, "/opt_ctrl_%d" % k)

OptCtrlsFile.close()

# Read them back after the kernel restart
n = 19  # length of opt_ctrls before the restart
opt_ctrls_new = [Function(V) for _ in range(n)]

OptCtrlsFile = HDF5File(MPI.comm_world, "opt_ctrls.h5", "r")

for k in range(n):
    OptCtrlsFile.read(opt_ctrls_new[k], "/opt_ctrl_%d" % k)

OptCtrlsFile.close()

Based on the MWE above, I found out that the only source of the initial problem is the subdomain. In the original problem I have two subdomains, one inside the other, and the forcing term acts only on the central one. If I apply the forcing term to both domains, it shouldn’t make any difference, but it does.

Adding the following code to the MWE above

class Obstacle(SubDomain):
    def inside(self, x, on_boundary):
        return (between(x[1], (0.3, 0.7)) and between(x[0], (0.3, 0.7)))
    
obstacle = Obstacle()

domains = MeshFunction("size_t", mesh, mesh.topology().dim())
domains.set_all(0)
obstacle.mark(domains, 1)

dx = dx(subdomain_data = domains)

and changing the weak formulation according to the subdomains:

F = ((u - u_0)/dt*v + nu*inner(grad(u), grad(v)) - f*v)*(dx(0) + dx(1))

always makes the projected gradient equal to zero at the first iteration, regardless of the solver or its parameters.

   N    Tit     Tnf  Tnint  Skip  Nact     Projg        F
 8379      0      1      0     0     0   0.000D+00   7.400D+00
  F =   7.4002373630857372

CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL

It’s a bit confusing, because without the functional term described in my first post everything works fine, but with this term everything works except the subdomains.

Am I doing something wrong introducing these subdomains?
What else can I try?

As you have not supplied your whole code, this is not reproducible.
I used the following code:

from fenics import *
from fenics_adjoint import *
from collections import OrderedDict
import numpy as np
import itertools
set_log_active(False)

nu = Constant(1e-5)
data = Constant(2)
alpha = Constant(1e-3)
c_max = 30
c_min = 0

nx = ny = 20
mesh = UnitSquareMesh(nx, ny)
V = FunctionSpace(mesh, "CG", 1)
dt = Constant(0.1)
T = 2

e = 2.71828

# Penalization parameters
u_max = 2.05
gamma = 100
tetha = 0.0016

# Chance constraint approximation parameters
m_1 = 1.0
m_2 = 0.4
tau = 1e-02
alpha = 0.9

class Obstacle(SubDomain):
    def inside(self, x, on_boundary):
        return (between(x[1], (0.3, 0.7)) and between(x[0], (0.3, 0.7)))

obstacle = Obstacle()

domains = MeshFunction("size_t", mesh, mesh.topology().dim())
domains.set_all(0)
obstacle.mark(domains, 1)

dx = dx(subdomain_data = domains)

ctrls = OrderedDict()
t = float(dt)
while t <= T:
    ctrls[t] = Function(V)
    t += float(dt)

def solve_heat(ctrls):
    u = TrialFunction(V)
    v = TestFunction(V)

    f = Function(V, name="source")
    u_0 = Function(V, name="solution")
    d = Function(V, name="data")

    #F = ( (u - u_0)/dt*v + nu*inner(grad(u), grad(v)) - f*v)*dx
    F = ((u - u_0)/dt*v + nu*inner(grad(u), grad(v)) - f*v)*(dx(0) + dx(1))
    a, L = lhs(F), rhs(F)
    bc = DirichletBC(V, 0, "on_boundary")

    t = float(dt)

    Theta = - 1 + alpha + (1+m_1*tau)/(1+m_2*tau*e**(1/tau*(-u_0 + u_max)))
    j = (0.5*float(dt)*assemble((u_0 - d)**2*dx)
         + gamma*0.125*float(dt)*assemble((Theta + ((Theta)**2 + tetha)**0.5)**2*dx(1)))

    while t <= T:
        f.assign(ctrls[t])
        data.t = t
        d.assign(interpolate(data, V))
        solve(a == L, u_0, bc)

        # Implement a trapezoidal rule
        if t > T - float(dt):
           weight = 0.5
        else:
           weight = 1

        Theta_1 = - 1 + alpha + (1+m_1*tau)/(1+m_2*tau*e**(1/tau*(-u_0 + u_max)))
        j += (weight*float(dt)*assemble((u_0 - d)**2*dx)
              + gamma*0.125*float(dt)*assemble((Theta_1 + ((Theta_1)**2 + tetha)**0.5)**2*dx(1)))
        t += float(dt)

    return u_0, d, j

u, d, j = solve_heat(ctrls)

alpha = Constant(1e-3)
regularisation = alpha/2*sum([1/dt*(fb-fa)**2*dx for fb, fa in
    zip(list(ctrls.values())[1:], list(ctrls.values())[:-1])])

J = j + assemble(regularisation)
m = [Control(c) for c in ctrls.values()]

lower = [interpolate(Constant(c_min), V) for i in m]
upper = [interpolate(Constant(c_max), V) for i in m]


rf = ReducedFunctional(J, m)
opt_ctrls = minimize(rf, method = "L-BFGS-B", options={"maxiter": 50,
                                                       'ftol': 2e-08,
                                                       'eps': 1e-02}, bounds = (lower,upper))

and get a non-zero output:

At iterate    0    f=  7.40024D+00    |proj g|=  1.85655D-03

At iterate    1    f=  7.39193D+00    |proj g|=  1.85541D-03

At iterate    2    f=  7.28443D+00    |proj g|=  1.89884D-03

At iterate    3    f=  1.59951D+00    |proj g|=  3.26446D-03

At iterate    4    f=  1.43350D+00    |proj g|=  9.60750D-04

At iterate    5    f=  1.37454D+00    |proj g|=  8.48970D-04

At iterate    6    f=  1.31398D+00    |proj g|=  1.02086D-02

At iterate    7    f=  1.27734D+00    |proj g|=  9.57158D-03

At iterate    8    f=  1.24365D+00    |proj g|=  8.81373D-03

At iterate    9    f=  1.24084D+00    |proj g|=  9.09307D-03

At iterate   10    f=  1.20161D+00    |proj g|=  6.45796D-03

At iterate   11    f=  1.17242D+00    |proj g|=  9.41562D-03

At iterate   12    f=  1.12304D+00    |proj g|=  1.81692D-02

At iterate   13    f=  1.10619D+00    |proj g|=  6.44892D-04

At iterate   14    f=  1.08800D+00    |proj g|=  6.61071D-04

At iterate   15    f=  1.07750D+00    |proj g|=  1.12310D-02

At iterate   16    f=  1.06787D+00    |proj g|=  2.67491D-02

At iterate   17    f=  1.06774D+00    |proj g|=  2.89199D-02

At iterate   18    f=  1.06631D+00    |proj g|=  3.77494D-02

At iterate   19    f=  1.06062D+00    |proj g|=  4.49648D-02

At iterate   20    f=  1.05775D+00    |proj g|=  5.16067D-02
  Positive dir derivative in projection 
  Using the backtracking step 

At iterate   21    f=  1.04326D+00    |proj g|=  2.86823D-02

At iterate   22    f=  1.03478D+00    |proj g|=  3.45108D-02

At iterate   23    f=  1.01515D+00    |proj g|=  4.24320D-02

At iterate   24    f=  9.80850D-01    |proj g|=  1.19378D-02

At iterate   25    f=  9.72497D-01    |proj g|=  4.92028D-03

At iterate   26    f=  9.66242D-01    |proj g|=  1.70678D-03

At iterate   27    f=  9.56382D-01    |proj g|=  1.15933D-03

At iterate   28    f=  9.26029D-01    |proj g|=  2.43307D-02

At iterate   29    f=  9.08273D-01    |proj g|=  2.75379D-03

At iterate   30    f=  9.06012D-01    |proj g|=  1.45683D-03

At iterate   31    f=  9.01124D-01    |proj g|=  1.08737D-02

At iterate   32    f=  8.95921D-01    |proj g|=  7.33296D-04

At iterate   33    f=  8.92746D-01    |proj g|=  6.09863D-04

At iterate   34    f=  8.80558D-01    |proj g|=  8.26862D-03

At iterate   35    f=  8.75667D-01    |proj g|=  1.25054D-02

At iterate   36    f=  8.69578D-01    |proj g|=  1.62612D-02

At iterate   37    f=  8.62537D-01    |proj g|=  3.04263D-02

At iterate   38    f=  8.46271D-01    |proj g|=  2.11370D-02

At iterate   39    f=  8.33560D-01    |proj g|=  1.35980D-02

At iterate   40    f=  8.27262D-01    |proj g|=  3.06617D-03

At iterate   41    f=  8.23616D-01    |proj g|=  5.45386D-03

At iterate   42    f=  8.20895D-01    |proj g|=  1.18826D-02

At iterate   43    f=  8.16378D-01    |proj g|=  1.82901D-02

At iterate   44    f=  8.13180D-01    |proj g|=  2.01514D-02

At iterate   45    f=  8.02364D-01    |proj g|=  6.22221D-03

At iterate   46    f=  8.00511D-01    |proj g|=  3.71477D-03

At iterate   47    f=  7.97678D-01    |proj g|=  1.01342D-03

At iterate   48    f=  7.93267D-01    |proj g|=  6.82997D-04

At iterate   49    f=  7.85135D-01    |proj g|=  2.99683D-03

At iterate   50    f=  7.84508D-01    |proj g|=  3.42491D-03

           * * *

Tit   = total number of iterations
Tnf   = total number of function evaluations
Tnint = total number of segments explored during Cauchy searches
Skip  = number of BFGS updates skipped
Nact  = number of active bounds at final generalized Cauchy point
Projg = norm of the final projected gradient
F     = final function value

           * * *

   N    Tit     Tnf  Tnint  Skip  Nact     Projg        F
 8379     50     92    829     0  2443   3.425D-03   7.845D-01
  F =  0.78450830436810637     

STOP: TOTAL NO. of ITERATIONS REACHED LIMIT    

I’m sorry, I should have changed the parameter tau. Please try with tau = 1e-03 or lower; the issue always appears at low tau values (tau <= 1e-03).

It might have something to do with an OverflowError, as I described before. But it’s strange that without subdomains everything is fine, even for tau around 1e-06.
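To put a number on the overflow threshold (a small sketch, independent of FEniCS): math.exp overflows just above the log of the largest double, about 709.78, so the largest representable |u - u_{max}| shrinks linearly with tau:

```python
import math
import sys

# math.exp(x) overflows for x above log(largest double) ~ 709.78
x_max = math.log(sys.float_info.max)

for tau in (1e-2, 1e-3, 1e-6):
    # largest u - u_max for which exp((u - u_max)/tau) is still finite
    print(tau, x_max * tau)  # tau = 1e-3 gives ~0.71, so u - u_max = 1 overflows
```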