Solving a PDE with a non-linear constraint

Hello, I recently made this post on a PDE with a constraint, and I would like to make a much simpler one here so other users can reply more easily.

I need to solve a complex PDE for two unknowns, subject to a constraint. Here I reproduce a minimal example of such a PDE. This example can of course be solved by solving the constraint analytically and substituting, but that is not my goal: I am providing it to test whether FEniCS can handle these constrained problems.

Consider a Poisson equation on \Omega = [0,1]\times[0,1] for the two unknowns u and v,

\nabla^2 (u+v) = f
with the additional constraint
u - v = g,

and BCs

u=u_0 on \partial \Omega.

I can easily solve this in FEniCS. However, as soon as I use a nonlinear constraint instead,

u^2 - v^2 = g,

the solution blows up. Do you have any idea why? Here is a snippet of the variational problem; let me know if you need the complete minimal working example. Thank you.

bc_u = DirichletBC(fsp.Q.sub(0), fsp.u_exact, rmsh.boundary)

bcs = [bc_u]

# variational form for the original problem (the Poisson equation);
# i is the predefined UFL index, with summation over repeated indices
F_u = (((fsp.u + fsp.v).dx(i)) * ((fsp.nu_u + fsp.nu_v).dx(i)) + fsp.f * (fsp.nu_u + fsp.nu_v)) * rmsh.dx \
      - bgeo.facet_normal[i] * ((fsp.u + fsp.v).dx(i)) * (fsp.nu_u + fsp.nu_v) * rmsh.ds_r \
      - bgeo.facet_normal[i] * ((fsp.u + fsp.v).dx(i)) * (fsp.nu_u + fsp.nu_v) * rmsh.ds_R

# this imposes the constraint
F_v = ((((fsp.u) ** 2 - (fsp.v) ** 2).dx(i) - fsp.g[i]) * ((fsp.u * fsp.nu_u.dx(i) - fsp.v * fsp.nu_v.dx(i)))) * rmsh.dx

# this enforces the constraint on the boundary
F_N = rpam.parameters['alpha'] / rmsh.r_mesh * ((((fsp.u) ** 2 - (fsp.v) ** 2).dx(i) - fsp.g[i]) * ((fsp.u * fsp.nu_u.dx(i) - fsp.v * fsp.nu_v.dx(i)))) * rmsh.ds

F = (F_u + F_v) + F_N

So you are trying to add a quadratic penalty enforcing the second constraint?

If that is the case, how are you actually adding the quadratic penalty to the energy minimization problem? The usual way to formulate a penalty approach is to start with, for instance, the Dirichlet energy

\min_{u} J = \min_{u} \frac{1}{2}\int_\Omega \nabla u \cdot \nabla u~\mathrm{d}x - \int_\Omega f u ~ \mathrm{d}x

subject to

u = g \quad \mathrm{on}\ \partial\Omega,

and add the quadratic penalty to this energy, i.e.

\min_{u} J + P = \min_{u} J + \frac{\alpha}{2}\int_{\partial\Omega} (u - g)^2~\mathrm{d}s

and then derive the optimality conditions (your residual) from this.
It is not super clear how you would do that in your case, or how you derived your system.
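
For reference, here is a minimal sketch of this penalty approach in legacy FEniCS, with a unit-square mesh and placeholder f, g, and alpha; the residual is obtained from the penalized energy with derivative():

from fenics import *

# minimal sketch: Dirichlet energy plus a boundary penalty enforcing
# u = g on the boundary; f, g, and alpha are placeholder values
mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "P", 1)
u = Function(V)
f = Constant(1.0)
g = Constant(0.0)
alpha = Constant(1e4)

# penalized energy J + P
J = 0.5 * dot(grad(u), grad(u)) * dx - f * u * dx
P = 0.5 * alpha * (u - g) ** 2 * ds
energy = J + P

# optimality condition (the residual) as the Gateaux derivative of the energy
F = derivative(energy, u, TestFunction(V))
solve(F == 0, u, J=derivative(F, u, TrialFunction(V)))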

Additionally, your code is not reproducible, and one has to guess what spaces u, v, g, nu_u, nu_v are in.


Please see the original post above, where I wrote the variational functional.


The penalty you wrote enforces u = g on the boundary, not the constraint that I want to implement, which is u^2 - v^2 = g.

Here is the definition of the function spaces:

from fenics import *

import read_parameters_solve as rpam
import load_mesh as lmsh

# define elements and function spaces
P_u = FiniteElement('P', triangle, rpam.parameters['function_space_degree'])
P_v = FiniteElement('P', triangle, rpam.parameters['function_space_degree'])
element = MixedElement([P_u, P_v])
Q = FunctionSpace(lmsh.mesh, element)

# collapsed component subspaces, used to hold data for each unknown
Q_u = Q.sub(0).collapse()
Q_v = Q.sub(1).collapse()

# Define functions
psi = Function(Q)               # mixed unknown holding (u, v)
J_psi = TrialFunction(Q)        # trial function for the Jacobian
nu_u, nu_v = TestFunctions(Q)   # test functions for u and v

u_exact = Function(Q_u)
v_exact = Function(Q_v)
laplacian_u_exact = Function(Q_u)

f = Function(Q_u)               # source term
g = Function(Q_u)               # constraint right-hand side
J_uv = TrialFunction(Q)
u, v = split(psi)               # symbolic components of the mixed unknown

assigner = FunctionAssigner(Q, [Q_u, Q_v])  # assigns (u, v) components into psi

Thank you

My whole point was to illustrate how one would usually go about implementing a penalty method, with the hope that you in turn could explain how you ended up with your variational formulation.

When using a quadratic penalty, one usually comes from an energy minimisation perspective, as I showed in my previous post.
What kind of energy are you minimizing?

It would really help readability if you wrote out your variational form in mathematics, i.e. using LaTeX notation, as it is far more readable than the code you have provided and makes it easier to link to the aforementioned functional.

For instance, I don't understand why your F_v term is supposed to enforce the constraint.

Thank you for your reply. I am sorry, I sent you the wrong variational problem. Here is the variational functional which I think should implement the constraint:

F = F_{\rm Poisson} + F_{\rm constraint}
where F_{\rm Poisson} is the functional related to the Poisson equation that you wrote:

F_{\rm Poisson}[u, v] = \int_\Omega \left( \frac{1}{2} \nabla (u+v) \cdot \nabla (u+v) + f (u+v) \right) \mathrm{d}x.

If I vary F_{\rm Poisson} with respect to u and v I obtain

\delta F_{\rm Poisson} = \int_{\Omega} \left[ \nabla (u+v) \cdot \nabla (\nu_u+\nu_v) + f (\nu_u + \nu_v) \right] \mathrm{d}x = \int_{\Omega} \left[ -\nabla^2 (u+v) + f \right] (\nu_u + \nu_v) \, \mathrm{d}x (dropping the boundary term from integration by parts), where \nu_u = \delta u and \nu_v = \delta v are the test functions associated with u and v, respectively.

\delta F_{\rm Poisson} = 0 is the equation I obtain if I take the original PDE,
\nabla^2 (u+v) - f = 0, multiply it by \nu_u + \nu_v, and integrate.

Following the example in your post, the functional F_{\rm constraint} which enforces the constraint u^2-v^2=g could be (?)

F_{\rm constraint}[u,v] = \frac{\alpha}{2} \int_{\Omega} (u^2 - v^2 - g)^2 \, \mathrm{d}x.

I don't know why in your post you wrote the integral over the boundary only. Here F_{\rm constraint} is an integral over \Omega, because I want to enforce the constraint throughout \Omega. If I vary F_{\rm constraint} I get

\delta F_{\rm constraint} = \alpha \int_{\Omega} (u^2 - v^2 - g)(2 u \nu_u - 2 v \nu_v) \, \mathrm{d}x.
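
In code, a minimal sketch of this residual, continuing from the function-space snippet above (alpha is a hypothetical penalty parameter), would be:

# minimal sketch, continuing from the function-space snippet above;
# alpha is a hypothetical penalty parameter
alpha = Constant(1e3)

# residual of the Poisson part: delta F_Poisson
F_poisson = (dot(grad(u + v), grad(nu_u + nu_v)) + f * (nu_u + nu_v)) * dx

# residual of the penalty part: delta F_constraint (note that no spatial
# derivative of the constraint appears here, unlike the original F_v term)
F_constraint = alpha * (u**2 - v**2 - g) * (2 * u * nu_u - 2 * v * nu_v) * dx

F = F_poisson + F_constraint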

Would I obtain the solution of the PDE above with the constraint above if I solved the variational problem \delta F_{\rm Poisson} + \delta F_{\rm constraint} = 0?

The reason for doing it over the boundary was to illustrate how the penalty method works. As you had only provided your variational formulation, it was very unclear how you had derived your scheme.

If you look at your derivation above, you observe that you do get

\delta F_{\rm constraint} = \alpha \int_{\Omega} (u^2 - v^2 - g)(2 u \nu_u - 2 v \nu_v) \, \mathrm{d}x,

which to me does not look like your F_v term, as there you are differentiating with respect to x_i: a.dx(i) is the same as \frac{\partial a}{\partial x_i}, ref: Form language — Unified Form Language (UFL) 2025.1.0 documentation.

Another thing you should consider is starting with a non-zero initial guess: as you can see from your maths above, the constraint part of the residual is zero if u = v = 0.
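
For instance, a minimal sketch of setting a non-zero initial guess, assuming the FunctionAssigner from your snippet above (the constant values are placeholders):

# minimal sketch: fill the mixed function psi with non-zero initial
# guesses via the FunctionAssigner; the constant values are placeholders
u_init = interpolate(Constant(1.0), Q_u)
v_init = interpolate(Constant(0.5), Q_v)
assigner.assign(psi, [u_init, v_init])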

Thank you. Could you please let me know your thoughts on the question in my last post?

Thank you.

In theory, yes. But the problem is highly non-linear and needs a good initial guess for convergence.

Thank you, it does converge to the correct solution, but the initial guess has to be extremely close to the exact solution.

Are you aware of any way to make the algorithm converge even if one starts further away from the exact solution? Maybe some specific solver options?

Lowering the relaxation_parameter helps a bit, but if I start too far away from the exact solution, lowering the relaxation parameter does not solve the issue, and the error blows up during the solver iterations. For reference, I set it as in the sketch below.
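
Here is how I set it, a minimal sketch (the value 0.5 is illustrative):

# minimal sketch: lowering the Newton relaxation parameter (damping);
# the value 0.5 is illustrative
solve(F == 0, psi, bcs,
      solver_parameters={"newton_solver": {"relaxation_parameter": 0.5}})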

I can provide a full minimal working example if this helps.

Thank you

You can use a line search, for instance by interfacing with PETSc SNES, which should be more robust than a naive Newton's method.
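
A minimal sketch with the legacy FEniCS interface, selecting SNES with a backtracking line search (the tolerance values are placeholders):

# minimal sketch: PETSc SNES with a backtracking ("bt") line search;
# tolerance values are placeholders
solve(F == 0, psi, bcs,
      solver_parameters={
          "nonlinear_solver": "snes",
          "snes_solver": {
              "line_search": "bt",
              "maximum_iterations": 100,
              "relative_tolerance": 1e-10,
              "report": True,
          },
      })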