# Implementation of local projection stabilization

I have the following linear form:

f = Coefficient(V)
w = TestFunction(V)
L = k * f * w * dx

For some reason, I need to scale the form L with different values of k; that is, the number k needs to be different for each test function w (of course, I know what the numbers are).

I considered defining k as a function, just like f, but the problem is that when we compute the inner product, the degrees of freedom of k can still differ even within the same cell.

Basically, I need k to be constant over all the cells in the support of the test function w_i when I compute the inner product k * f * w_i * dx.

Is there any possible approach to implement the above form?

Define k as a dolfinx.fem.Function on a DG 0 space? That would give you a (cell-wise) piecewise constant value.
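In case it helps to see what "cell-wise constant" means for the assembled vector, here is a minimal numpy sketch of L_i = Σ_K k_K ∫_K f w_i dx on a uniform 1D P1 mesh. The matrices are hand-assembled for illustration only; this is not the dolfinx API, and all names are made up:

```python
import numpy as np

# Assembled vector L_i = sum_K k_K \int_K f w_i dx on a uniform 1D P1 mesh,
# with one value of k per cell -- what a DG0 coefficient gives you.
n = 4
h = 1.0 / n
k = np.array([1.0, 10.0, 1.0, 10.0])  # DG0 dofs: one value of k per cell
f = np.ones(n + 1)                    # P1 coefficients of f (here f = 1)

Mloc = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])  # local P1 mass matrix
L = np.zeros(n + 1)
for c in range(n):
    idx = [c, c + 1]
    L[idx] += k[c] * (Mloc @ f[idx])  # k is constant inside cell c

# Note: at a shared vertex, contributions from the two neighbouring cells
# (with possibly different k) still add up in the same entry of L.
print(L)
```

This also shows the limitation raised below: within one cell, a DG0 k multiplies every basis function of that cell by the same value.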

Yes, DG can give me a discontinuous function.
But I want the value of k to differ between test functions within the same cell. For DG 0, I think the value stays the same within a cell; is that true?

What should k be within each cell? You could use a higher order DG space to represent your coefficient, so that it will vary inside each cell.

It is just a number; it doesn't have to be defined in any function space. For DG, there are still inner products between different basis functions within each cell, right?

Let's say i, j are two dofs within the same cell, and I have:

∫ k_i v_j dx

I want k_i = 0 when i ≠ j. Is that possible?

We are suggesting defining k as a dolfinx.fem.Function because if you defined it as (say) a numpy array, you wouldn't be able to use it as easily with ufl and ffcx.

Honestly, I still haven't understood what you are trying to do. Please give us an example starting from your weak form, rather than from the values you would like (or not like) at each dof.

Sorry, I think I made it hard to understand.

I'm implementing the following modified weak form:

Here, w_h and u_h are the test function and the unknown, respectively, and g_h gives an approximation of the gradient, which means

I use L2-Projection to get g(u_h) and g(w_h)

For u_h, I can just define a function for g(u_h), but I don't think we can play the same trick with the test function.

My idea is to create a function whose nodal values are the interpolation of g(w_h) for the corresponding test function, but I don't want the dofs to affect each other, especially dofs in the same cell.

Any suggestions for that?

Is this method related by any chance to local projection stabilization schemes?

Can you give us a minimal example of the code (a builtin mesh, only the assembly related to the g_h term, and what you managed to do for s_h)?

I think that the functionalities shown in dolfinx/python/demo/demo_static-condensation.py at main · FEniCS/dolfinx · GitHub could help, but I haven't used them much, so I'd wait for comments by other fellow developers.

Yes, exactly, this is a local projection stabilization method.
The term s_h is used as an additional stabilization term for conservation equations.

Thanks for sharing, I will look at the demo you sent.

I renamed the topic so that future users searching the name of the method will find it.

It's possible that I'll do some research work on LPS in the future, so I'd be interested in seeing a proof of concept for your form. However, please read the following

Can you give us a minimal example of the code (a builtin mesh, only the assembly related to the g_h term, and what you managed to do for s_h)?

and give some further context. For instance, how is g built?

Let's consider the 1D Burgers' equation

and we write the weak formulation as follows:

Note: the term marked in red is an additional stabilizing term, which is applied because we always have stability issues with conservation equations. What I show here is a reduced version; you can see more details in this paper: https://www.sciencedirect.com/science/article/pii/S0021999123002486

Basically, g(\partial_x f) gives an approximation of \partial_x f, and I use an L2 projection to compute it:

import numpy as np
import ufl
from dolfinx import fem, mesh
from dolfinx.fem import Function
from dolfinx.fem.petsc import LinearProblem
from mpi4py import MPI
from ufl import dx

msh = mesh.create_interval(MPI.COMM_WORLD, 100, (0.0, 1.0))
V = fem.functionspace(msh, ("Lagrange", 1))

u = ufl.TrialFunction(V)
v = ufl.TestFunction(V)

# initial data
f = Function(V)
f.interpolate(lambda x: 10 * np.exp(-((x[0] - 0.5) ** 2) / 0.01))

a1 = u * v * dx
L1 = f.dx(0) * v * dx
problem1 = LinearProblem(a1, L1, bcs=[], petsc_options={"ksp_type": "preonly", "pc_type": "lu"})
gh = problem1.solve()
# Here I use an L2 projection to compute gh, an approximation of \partial_x f in the CG space

# Now I assemble the system with forward Euler
k = 0.01  # time step
a2 = u * v * dx
L2 = f * v * dx - k * f * f.dx(0) * v * dx - k * (f.dx(0) - gh) * (v.dx(0) - ?) * dx  # "?" is the missing term


So now you see that I can compute g(\partial_x f) (that is gh), but I don't know how to deal with g(\partial_x v) (the "?" above).

I think you have to reformulate the stabilization term first. Otherwise, it would not be possible to implement any of it, because the L^2 projection does not necessarily have a closed-form representation…
So here is my understanding of it: if you define (using the notation of the paper)
g_h : L^2 \to V_h to be the standard L^2 projection on V_h. Then of course you have

(\nabla u_h - g_h (\nabla u_h), v_h) = 0 \forall v_h \in V_h.

Setting v_h = g_h(\nabla w_h) yields

(\nabla u_h - g_h (\nabla u_h), g_h(\nabla w_h)) = 0

and therewith

(\nabla u_h - g_h (\nabla u_h), \nabla w_h - g_h(\nabla w_h)) = (\nabla u_h - g_h (\nabla u_h), \nabla w_h).

So your "?" term can simply be omitted.
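If it helps, the orthogonality argument above can be checked numerically. Here is a small numpy sketch on a uniform 1D P1 mesh, with hand-assembled matrices rather than the actual dolfinx objects (all names are hypothetical):

```python
import numpy as np

n = 8
h = 1.0 / n
nn = n + 1  # P1 nodes on [0, 1]

# Hand-assembled 1D P1 mass matrix M, and B[i, j] = \int phi_j' phi_i dx
M = np.zeros((nn, nn))
B = np.zeros((nn, nn))
Mloc = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
Bloc = 0.5 * np.array([[-1.0, 1.0], [-1.0, 1.0]])
for c in range(n):
    idx = np.ix_([c, c + 1], [c, c + 1])
    M[idx] += Mloc
    B[idx] += Bloc

rng = np.random.default_rng(0)
u = rng.standard_normal(nn)  # arbitrary u_h in V_h
w = rng.standard_normal(nn)  # arbitrary w_h in V_h

gu = np.linalg.solve(M, B @ u)  # g_h(u_h'): L2 projection of u_h' onto V_h
gw = np.linalg.solve(M, B @ w)  # g_h(w_h')

# (u_h' - g_h(u_h'), g_h(w_h'))_{L2}: first term assembled via B, second via M
residual = gw @ (B @ u) - gu @ (M @ gw)
print(residual)  # machine zero: the projection error is L2-orthogonal to V_h
```

The residual vanishes for any choice of u and w, which is exactly the identity used to drop the g_h(\nabla w_h) part of the test side.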
Nevertheless, the term g_h (\nabla u_h) is still only implicitly defined. So you would need to use a mixed element method, defining another unknown function, e.g. G_h, s.t.

(PDE, w_h) + (\nabla u_h - G_h, \nabla w_h) = 0 \\ (G_h - \nabla u_h, v_h) = 0.

This currently also falls short in your implementation, because the g_h you calculate is only the projection of the gradient of your last iterate, not of the solution of your problem defined by (a2, L2).

@WjjSteve I have started an attempt at GitHub - francesco-ballarin/local_projection_stabilization_dolfinx, but I have stopped right before assembling the actual LPS terms in the weak formulation. To continue, I'll need someone with more expertise on the actual method, because if I do it myself I'll probably end up making silly mistakes (like assuming linearity where there isn't any).

I'll add you (or anyone else willing to help) to the repo if you privately send me your github username. You'll need to install https://viskex.github.io/ for the visualization.

Sorry, what do you mean by "implicitly defined"?

In this case, u_h is defined in V_h, and g(\nabla u_h) is also defined in V_h; it is because the gradient of u_h is discontinuous that I use the projection in the first place.

The point is that u_h is just one function, which can be projected easily, but I cannot do the same for every test function.

My github username is just the same as in this community, and I guess you already added me, thanks.

By implicit I mean that there is not necessarily a closed-form explicit representation of the projection. The L^2 projection is defined as the solution of a linear system, i.e. P_{L^2} u \in V_h solves

(P_{L^2} u - u, v_h)_{L^2}= 0 \forall v_h \in V_h.

This means that, in the assembly step of a linear FEM scheme, it is not possible without modifications to assemble the projection of a trial or test function, because out of the box FEniCS can only apply a quadrature rule to an explicit representation. You could change the assembly of the matrices, but that is very advanced, so I would suggest using the reformulation.

Since you use forward Euler, you probably do not need the mixed method. But if you do not observe the dissipative properties, you might want to switch to a backward Euler method, which trivially gives you the dissipative a priori estimates; then you need the mixed method as well.
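In case a starting point is useful, here is a numpy sketch of the block structure of that mixed method for one backward Euler step, on a toy problem (just u_t = 0 plus the reformulated stabilization term (\nabla u_h - G_h, \nabla w_h)) on a uniform 1D P1 mesh. The matrices are hand-assembled and all names are hypothetical; this is not dolfinx code:

```python
import numpy as np

n, dt = 8, 0.01
h = 1.0 / n
nn = n + 1  # P1 nodes on [0, 1]

# Hand-assembled 1D P1 matrices: mass M, stiffness K,
# and B[i, j] = \int phi_j' phi_i dx
M = np.zeros((nn, nn))
K = np.zeros((nn, nn))
B = np.zeros((nn, nn))
Mloc = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
Kloc = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
Bloc = 0.5 * np.array([[-1.0, 1.0], [-1.0, 1.0]])
for c in range(n):
    idx = np.ix_([c, c + 1], [c, c + 1])
    M[idx] += Mloc
    K[idx] += Kloc
    B[idx] += Bloc

f = np.sin(np.pi * np.linspace(0.0, 1.0, nn))  # previous time step u^n

# Backward Euler step of u_t = 0 plus the stabilization term:
#   ((u - f)/dt, w) + (u' - G, w') = 0   for all w in V_h
#   (G - u', v)                    = 0   for all v in V_h
# where (u', w') -> K u  and  (G, w') -> B^T G
A = np.block([[M / dt + K, -B.T],
              [-B,          M  ]])
rhs = np.concatenate([M @ f / dt, np.zeros(nn)])
u, G = np.split(np.linalg.solve(A, rhs), 2)

# The second block row enforces G = M^{-1} B u, i.e. G = g_h(u') implicitly
print(np.allclose(G, np.linalg.solve(M, B @ u)))
```

Solving the coupled system makes G the projection of the gradient of the *new* iterate, which is the point of the mixed formulation; the explicit variant instead projects the gradient of the previous time step.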
