Hello, I want to ask a question about a variational formulation.
I have the following linear form:
f = Coefficient(V)
w = TestFunction(V)
L = k * f * w * dx
For some reason, I need to scale the form L with a different k, meaning the number k needs to be different for different test functions w (of course, I know what the numbers are).
I considered defining k as a function, just like f, but the problem is that when the inner product is computed, the degrees of freedom of k can still differ even within the same cell.
Basically, I need k to be constant over all the cells touched by the test function w_i when I compute the inner product k * f * w_i * dx.
Is there any possible approach to implement the above form?
Thanks in advance!
Yes, DG can give me a discontinuous function.
But I want the value of k to be different for different test functions within the same cell. For DG, I think its value stays the same within a cell, is that true?
It is just a number; it doesn’t have to be defined in any function space. For DG, there are still inner products against the different basis functions within each cell, right?
Let’s say i, j are two DOFs within the same cell; then I have terms like
\int k_i v_j \, dx
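To make concrete what I mean at the algebraic level, here is a minimal sketch (the tiny mesh, the data in f and the array k_values are placeholders I made up): I would like each entry of the assembled vector, i.e. each test function, to be scaled by its own number.

from mpi4py import MPI
import numpy as np
import ufl
from dolfinx import fem, mesh

msh = mesh.create_interval(MPI.COMM_WORLD, 10, (0.0, 1.0))
V = fem.functionspace(msh, ("Lagrange", 1))
f = fem.Function(V)
f.x.array[:] = 1.0                                  # placeholder data
w = ufl.TestFunction(V)

b = fem.assemble_vector(fem.form(f * w * ufl.dx))   # b_i = \int f w_i dx
k_values = np.ones(b.array.size)                    # one number per dof / test function
b.array[:] *= k_values                              # now b_i = k_i \int f w_i dx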
We are suggesting defining k as a dolfinx.fem.Function because if you define it as a (say) numpy array then you wouldn’t be able to use it as easily with ufl and ffcx.
Honestly, I still haven’t understood what you are trying to do. Please give us an example starting from your weak form, rather than the values you would like (or not like) to have at each dof.
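For example, here is a minimal sketch of that suggestion (the DG0 choice and the linspace values are just placeholders for illustration): put your numbers into a dolfinx.fem.Function, whose dof array you can fill from a numpy array, and then k is an ordinary coefficient in the UFL form.

from mpi4py import MPI
import numpy as np
import ufl
from dolfinx import fem, mesh

msh = mesh.create_interval(MPI.COMM_WORLD, 100, (0.0, 1.0))
V = fem.functionspace(msh, ("Lagrange", 1))
V0 = fem.functionspace(msh, ("DG", 0))                  # one dof per cell

f = fem.Function(V)
f.x.array[:] = 1.0                                      # placeholder data
w = ufl.TestFunction(V)

k = fem.Function(V0)
k.x.array[:] = np.linspace(1.0, 2.0, k.x.array.size)    # your per-cell values go here

b = fem.assemble_vector(fem.form(k * f * w * ufl.dx))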
I’m implementing the following modified weak form:
Here, w_h and u_h are the test function and the coefficient, respectively, and g_h gives an approximation of the gradient, which means I use an L2 projection to get g(u_h) and g(w_h).
For u_h, I can simply define a function for g(u_h), but I don’t think we can do the same with the test function.
My idea is to create a function whose nodal values interpolate the corresponding test functions, i.e. g(w_h), but I don’t want the dofs to affect each other, especially dofs within the same cell.
Any suggestions for that?
Thanks for taking the time!
I renamed the topic so that future users searching the name of the method will find it.
It’s possible that I’ll do some research work on LPS in the future, so I’d be interested in seeing a proof of concept for your form. However, please read the following:
Can you give us a minimal example of the code (a builtin mesh, only the assembly related to the g_h term, and what you managed to do for s_h)?
and give some further context. For instance, how is g built?
Note: the term marked in red is an additional stabilizing term, which is applied because we always have stability issues for conservation equations. What I show here is a reduced version; you can see more details in this paper: https://www.sciencedirect.com/science/article/pii/S0021999123002486
Basically, g(\partial_x f) gives an approximation of \partial_x f, and I use an L2 projection to calculate it:
from mpi4py import MPI
import numpy as np
import ufl
from ufl import dx
from dolfinx import fem, mesh
from dolfinx.fem import Function
from dolfinx.fem.petsc import LinearProblem

msh = mesh.create_interval(MPI.COMM_WORLD, 100, (0.0, 1.0))
V = fem.functionspace(msh, ("Lagrange", 1))
u = ufl.TrialFunction(V)
v = ufl.TestFunction(V)
x = ufl.SpatialCoordinate(msh)
f = Function(V)
f.interpolate(lambda x: 10 * np.exp(-((x[0] - 0.5) ** 2) / 0.01))
# initial data
a1 = u * v * dx
L1 = f.dx(0) * v * dx
problem1 = LinearProblem(a1, L1, bcs=[], petsc_options={"ksp_type": "preonly", "pc_type": "lu"})
gh = problem1.solve()
# Here I use an L2 projection to compute gh, an approximation of \partial_x f in the CG space.
# Now I assemble the system with forward Euler.
k = 0.01  # time step
a2 = u * v * dx
L2 = f * v * dx - k * f * f.dx(0) * v * dx - k * (f.dx(0) - gh) * (v.dx(0) - ?) * dx
So now you see that I can compute g(\partial_x f) (that is gh above), but I don’t know how to deal with g(\partial_x v).
I think you have to reformulate the stabilization term first. Otherwise, it would not be possible to implement any of it because the L^2 projection does not necessarily have a closed-form representation…
So here is my understanding of it: define (using the notation of the paper) g_h : L^2 \to V_h to be the standard L^2 projection onto V_h. Then of course you have, by the orthogonality of the projection,
(\varphi - g_h(\varphi), v_h)_{L^2} = 0 \quad \forall \varphi \in L^2, \; v_h \in V_h,
and in particular, since g_h(\partial_x w_h) \in V_h,
(\partial_x u_h - g_h(\partial_x u_h), \; \partial_x w_h - g_h(\partial_x w_h))_{L^2} = (\partial_x u_h - g_h(\partial_x u_h), \; \partial_x w_h)_{L^2}.
So your term -? can simply be omitted.
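Concretely, the load form from your snippet would then read (a sketch reusing f, gh, k, v and dx from that snippet):

L2 = f * v * dx - k * f * f.dx(0) * v * dx - k * (f.dx(0) - gh) * v.dx(0) * dx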
Nevertheless, the term g_h(\partial_x u_h) is still only implicitly defined. So you would need to use a mixed element method, introducing another unknown function, e.g. G_h \in V_h, such that
(G_h, v_h)_{L^2} = (\partial_x u_h, v_h)_{L^2} \quad \forall v_h \in V_h.
This currently also falls short in your implementation, because the gh you calculate is only the projection of the gradient of your last iterate, not of the solution of the problem defined by (a2, L2).
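For illustration, here is a minimal self-contained sketch of such a mixed formulation in dolfinx. Everything beyond the reduced 1D form above is my own assumption (dolfinx 0.7-style API, one semi-implicit time step with the convection term kept explicit, no boundary conditions, names W, G, q, wh chosen by me), so treat it as a starting point rather than the method from the paper.

from mpi4py import MPI
import numpy as np
import basix.ufl
import ufl
from dolfinx import fem, mesh
from dolfinx.fem.petsc import LinearProblem

msh = mesh.create_interval(MPI.COMM_WORLD, 100, (0.0, 1.0))
P1 = basix.ufl.element("Lagrange", "interval", 1)
W = fem.functionspace(msh, basix.ufl.mixed_element([P1, P1]))  # unknowns (u, G), with G ~ g_h(\partial_x u)

(u, G) = ufl.TrialFunctions(W)
(v, q) = ufl.TestFunctions(W)

V = fem.functionspace(msh, ("Lagrange", 1))
f = fem.Function(V)                                  # solution at the previous time step
f.interpolate(lambda x: 10 * np.exp(-((x[0] - 0.5) ** 2) / 0.01))

k = 0.01                                             # time step
dx = ufl.dx

# One semi-implicit step: time derivative and stabilization implicit, convection explicit.
# The g_h(\partial_x v) part of the stabilization is dropped, as discussed above.
a = (u * v / k) * dx \
    + (u.dx(0) - G) * v.dx(0) * dx \
    + (G - u.dx(0)) * q * dx                         # weakly enforces G = L2 projection of \partial_x u
L = (f * v / k) * dx - f * f.dx(0) * v * dx

problem = LinearProblem(a, L, bcs=[], petsc_options={"ksp_type": "preonly", "pc_type": "lu"})
wh = problem.solve()
uh = wh.sub(0).collapse()                            # the new iterate
Gh = wh.sub(1).collapse()                            # its projected gradient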
@WjjSteve I have started an attempt at GitHub - francesco-ballarin/local_projection_stabilization_dolfinx, but I have stopped right before assembling the actual LPS terms in the weak formulation. To continue, I’ll need someone with more expertise on the actual method, because if I do it myself I’ll probably end up making silly mistakes (like assuming linearity when there isn’t any).
I’ll add you (or anyone else willing to help) to the repo if you privately send me your github username. You’ll need to install https://viskex.github.io/ for the visualization.
Sorry, what do you mean by “implicitly defined”?
In this case, u_h is defined in V_h, and g(\nabla u_h) is also defined in V_h, but the gradient of u_h is not continuous; that’s why I use the projection.
The point is that u_h is just one function, which can be projected easily, but I cannot do the same for every test function.
By implicit I mean that there is not necessarily a closed-form explicit representation of the projection. The L^2 projection is defined as the solution of a linear system, i.e. P_{L^2} u \in V_h solves
(P_{L^2} u - u, v_h)_{L^2} = 0 \quad \forall v_h \in V_h.
This means that, in the assembly step of a linear FEM scheme, it is not possible without modifications to assemble the projection of a TrialFunction or TestFunction, because out of the box FEniCS can only apply a quadrature rule to an explicit representation. You could change the assembly of the matrices, but that is very advanced, so I would suggest using the reformulation.
Since you use forward Euler, you probably do not need the mixed method. But if you do not observe the dissipative properties, you might want to switch to a backward Euler method, which trivially gives you the dissipative a priori estimates; then you need to use a mixed method as well.