Strange behavior when solving a PDE with an auxiliary variable

Hello,
Given the unit disk \Omega and its boundary \partial \Omega, consider the PDE for the vector \vec{u}
\nabla^2 u^i = 0
with BCs
u^1 = f \textrm{ on } \partial \Omega
\hat{n}\cdot \vec{u} = 0 \textrm{ on } \partial \Omega.

This is a minimal working example that should be considered in the context of a more complex PDE, which I am not showing here. For that more complex PDE I want to introduce a new variable \vec{\phi}, a function of \vec{u}, in order to impose a BC on \vec{\phi}. In the minimal working example, let us simplify things and take the simplest possible change of variable:

\phi^i = u^i

[Please don’t tell me that this change of variable is trivial: I chose this example to keep things simple and get feedback, and even this simple example produces an odd behavior that I would like to understand.]

So I rewrite my PDE in terms of \vec{u} and \vec{\phi} as

\nabla^2 u^i = 0
u^i - \phi^i = 0
with BCs
u^1 = f \textrm{ on } \partial \Omega
\hat{n}\cdot \vec{\phi} = 0 \textrm{ on } \partial \Omega.

I solve it in the code below, implementing the second BC with Nitsche’s method:

from __future__ import print_function
from fenics import *
import ufl as ufl
from mshr import *

disk = Circle( Point( 0, 0 ), 1 )
domain =  disk
mesh = generate_mesh(domain, 16)

boundary = 'on_boundary'

# mesh size and penalty parameter for Nitsche's method
r_mesh = mesh.hmin()
beta = 1e1

xdmffile_v = XDMFFile( "v.xdmf" )
xdmffile_phi = XDMFFile( "phi.xdmf" )
xdmffile_facet_normal = XDMFFile( "facet_normal.xdmf" )
xdmffile_n = XDMFFile( "n.xdmf" )

i, j = ufl.indices( 2 )

# mixed space for (v, phi): both quadratic vector Lagrange elements
P_v = VectorElement('P', triangle, 2)
P_phi = VectorElement('P', triangle, 2)
element = MixedElement([P_v, P_phi])
Q_v_phi = FunctionSpace(mesh, element)
Q_v = Q_v_phi.sub(0).collapse()
Q_phi = Q_v_phi.sub(1).collapse()

facet_normal = FacetNormal( mesh )


#the exact solution of the PDE
class v_exact_expression( UserExpression ):
    def eval(self, values, x):
        values[0] = 1 + (x[0]**2) + 2*(x[1]**2)
        values[1] = 1 + (x[0]**2) - 2*(x[1]**2)
    def value_shape(self):
        return (2,)

class laplacian_v_exact_expression( UserExpression ):
    def eval(self, values, x):
        values[0] = 0
        values[1] = 0
    def value_shape(self):
        return (2,)

# Define variational problem
v_phi = Function( Q_v_phi )
v, phi = split(v_phi)
nu_v, nu_phi = TestFunctions(Q_v_phi)
f = Function(Q_v)
v_exact = Function(Q_v)
phi_exact = Function(Q_phi)
J_v_phi = TrialFunction(Q_v_phi)
v_i = Function(Q_v)
phi_i = Function(Q_phi)

assigner = FunctionAssigner(Q_v_phi, [Q_v, Q_phi])


def calc_normal_cg2(mesh):
    # L^2 projection of the facet normal onto a CG2 vector field (boundary-only forms)
    n = FacetNormal(mesh)
    V = VectorFunctionSpace(mesh, "CG", 2)
    u = TrialFunction(V)
    v = TestFunction(V)
    a = inner(u, v) * ds
    l = inner(n, v) * ds
    A = assemble(a, keep_diagonal=True)
    L = assemble(l)

    A.ident_zeros()
    nh = Function(V)
    solve(A, nh.vector(), L)
    return nh

def facet_normal_smooth():
    u = calc_normal_cg2(mesh)
    return as_tensor(u[i], (i))

f.interpolate( laplacian_v_exact_expression( element=Q_v.ufl_element() ) )
v_exact.interpolate( v_exact_expression( element=Q_v.ufl_element() ) )

# F_v: weak Laplace equation for v, including its boundary consistency term
F_v = (((v[i]).dx( j )) * ((nu_v[i]).dx( j )) + f[i] * nu_v[i]) * dx - (  facet_normal[i] * ((v[j]).dx( i )) * nu_v[j]) * ds
# F_phi: bulk (L^2) identification of phi with v
F_phi = (phi[j] - v[j] )*nu_phi[j]*dx
# F_N: Nitsche-type penalty enforcing phi . n = 0 on the boundary
F_N = beta / r_mesh * ( phi[i] * facet_normal[i] ) * (facet_normal[j] * nu_phi[j]) * ds
F = F_v + F_phi + F_N

# Dirichlet BC on the first (x) component of v
profile_v_0 = Expression( '1 + pow(x[0], 2) + 2*pow(x[1], 2)', element = Q_v_phi.sub(0).sub(0).ufl_element())
bc = DirichletBC( Q_v_phi.sub(0).sub(0), profile_v_0, boundary )

J = derivative( F, v_phi, J_v_phi )
problem = NonlinearVariationalProblem( F, v_phi, bc, J )
solver = NonlinearVariationalSolver( problem )

solver.solve()

v_dummy, phi_dummy = v_phi.split( deepcopy=True )

xdmffile_v.write( v_dummy, 0 )
xdmffile_phi.write( phi_dummy, 0 )

xdmffile_facet_normal.write( facet_normal_smooth(), 0 )
xdmffile_n.write( project(facet_normal_smooth(), Q_v), 0 )

print("\int_\Omega [(phi-v)]^2 dx = ", assemble((( dot(phi_dummy - v_dummy, phi_dummy - v_dummy)  ))*dx))
print("\int_{\partial \Omega} [(phi-v)]^2 ds = ", assemble((( dot(phi_dummy - v_dummy, phi_dummy - v_dummy)  ))*ds))

The result is

Solving nonlinear variational problem.
  Newton iteration 0: r (abs) = 8.634e+01 (tol = 1.000e-10) r (rel) = 1.000e+00 (tol = 1.000e-09)
  Newton iteration 1: r (abs) = 6.755e-13 (tol = 1.000e-10) r (rel) = 7.824e-15 (tol = 1.000e-09)
  Newton solver finished in 1 iterations and 1 linear solver iterations.
\int_\Omega [(phi-v)]^2 dx =  0.18481804693885615
\int_{\partial \Omega} [(phi-v)]^2 ds =  18.896337188259185

As you can see from the output of the code and from the screenshot (phi on the left, v on the right), the fields phi and v differ on \partial \Omega.

How is this possible, given that I implemented the equation u^i - \phi^i = 0 in F_phi? Note that adding (phi[j] - v[j] )*nu_phi[j]*ds to F_phi does not solve the problem.

Thank you

So through your second equation you would like to enforce a condition on u, I assume? It seems to me that that is not what’s happening: the equation for u is decoupled and stands alone. Think of how this is assembled: the u rows of the matrix don’t see any effect of \phi, so you’d get something like:

K = \begin{pmatrix} K_{11} & 0 \\ K_{21} & K_{22} \end{pmatrix}, \qquad K^{-1} = \begin{pmatrix} K_{11}^{-1} & 0 \\ -K_{22}^{-1} K_{21} K_{11}^{-1} & K_{22}^{-1} \end{pmatrix},
and the solution from the first set of unknowns (u) has completely decoupled from that of the second (\phi).
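
To make this concrete, here is a quick check you could append to your script (just a sketch; it reuses F, v_phi, J_v_phi and Q_v_phi from your code, where v plays the role of u, and assumes a serial run with the default PETSc backend):

import numpy as np
from scipy.sparse import csr_matrix

# Assemble the Jacobian of the mixed problem and convert it to a scipy matrix.
A = assemble(derivative(F, v_phi, J_v_phi))
indptr, col_indices, values = as_backend_type(A).mat().getValuesCSR()
A_sp = csr_matrix((values, col_indices, indptr), shape=(A.size(0), A.size(1)))

# Degrees of freedom belonging to v (block 1) and to phi (block 2).
v_dofs = Q_v_phi.sub(0).dofmap().dofs()
phi_dofs = Q_v_phi.sub(1).dofmap().dofs()

# K_12 (v rows, phi columns) is identically zero: phi never enters the v equations.
print("max |K_12| =", abs(A_sp[v_dofs, :][:, phi_dofs]).max())
# K_21 (phi rows, v columns) is not zero: v does enter the phi equations.
print("max |K_21| =", abs(A_sp[phi_dofs, :][:, v_dofs]).max())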

What you are actually solving is:

  • A vector Laplace equation for u, with Dirichlet conditions on the x component, and homogeneous Neumann conditions in the remaining component directions.
  • Once this is solved, \phi is obtained as an L^2 projection of the u just computed, with the added penalty on the boundary (see the sketch below the list).
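
In terms of your script, that two-step procedure would look roughly like the following sketch (reusing the names defined there; it should reproduce the result of the monolithic solve):

# 1) Solve the vector Laplace problem for v alone; phi plays no role here.
v_only = Function(Q_v)
nu = TestFunction(Q_v)
F_v_only = ((v_only[i]).dx(j) * (nu[i]).dx(j) + f[i] * nu[i]) * dx \
           - (facet_normal[i] * ((v_only[j]).dx(i)) * nu[j]) * ds
bc_v = DirichletBC(Q_v.sub(0), profile_v_0, boundary)
solve(F_v_only == 0, v_only, bc_v)

# 2) Then compute phi as an L^2 projection of that v, with the boundary penalty added.
phi_trial = TrialFunction(Q_phi)
nu_p = TestFunction(Q_phi)
a_phi = phi_trial[j] * nu_p[j] * dx \
        + beta / r_mesh * (phi_trial[i] * facet_normal[i]) * (facet_normal[j] * nu_p[j]) * ds
L_phi = v_only[j] * nu_p[j] * dx
phi_only = Function(Q_phi)
solve(a_phi == L_phi, phi_only)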

[ Unrelated, but aren’t your boundary conditions incompatible? Where n = e_x (e.g. at the point (1,0) of the unit circle) you’d be attempting to prescribe both u\cdot e_x = f and u\cdot e_x = 0. ]

I see, thank you. Let me try another way to formulate my question then:
Given the PDE
\nabla ^2 u^i = f^i
with BCs
\vec{v}(\vec{u}) \cdot \vec{u} = 0 \textrm{ on } \partial \Omega
[plus another BC for u, unimportant here].
Here \vec{v} is a vector field that depends on \vec{u}: v^i(u) = A^i_j(u) \, \hat{n}^j, where \hat{n} is the facet normal of \partial \Omega and is independent of u.

As I discussed here, I would like to impose the BC \vec{v}(\vec{u}) \cdot \vec{u} = 0 \textrm{ on } \partial \Omega with Nitsche’s method, but this is tricky because the BC is nonlinear with respect to u.

However, I would like to try this workaround: set \phi^i \equiv u^j A^i_j(u); then the BC reads \phi^i \hat{n}^i = 0, which is linear in the new variable, so maybe Nitsche’s method will work now(?).

Can I solve the original PDE simply by reformulating it in terms of u and \phi as

\nabla ^2 u^i = f^i
u^i = \phi^i
with BCs
\phi^i \hat{n}^i= 0 \textrm{ on } \partial \Omega
[plus the other BC for u]

?
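
Concretely, in terms of the script in my first post, what I have in mind would replace F_phi and F_N with something like the following sketch, where A_of_u is just a placeholder for my actual A^i_j(u):

# Hypothetical placeholder for A^i_j(u); in the real problem this depends on u.
def A_of_u(u):
    return Identity(2)

# phi^i = A^i_j(u) u^j identified in the bulk ...
F_phi = (phi[i] - (A_of_u(v))[i, j] * v[j]) * nu_phi[i] * dx
# ... and the (now linear) BC phi . n = 0 imposed with the same Nitsche-type penalty.
F_N = beta / r_mesh * (phi[i] * facet_normal[i]) * (facet_normal[j] * nu_phi[j]) * ds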

No, I’m afraid such a set-up simply decouples mathematically into a Laplace problem for u and a specialized projection of u onto \phi. There is no information from \phi flowing back to u.

  • Could you explain why you’re not happy with my suggestions for getting a non-linear Nitsche approach working in your other post?
  • Are you bound to legacy FEniCS? I believe FEniCSx introduces boundary spaces, so that you might consider a Lagrange multiplier approach.

Yes, there are boundary spaces in FEniCSx; see for instance: Coupling PDEs of multiple dimensions — FEniCS Workshop
