I am trying to simulate the deformation of an elastic structure under a point load using FEniCS. The code runs and gives the qualitative behaviour I expected, but I am having trouble matching the values of the applied loads to those predicted by other models. I believe the issue comes from the fact that I have non-dimensionalised the system but am unsure how to non-dimensionalise the point load. The code for my problem is:
import dolfin as df

class ArchDynamicsEquation(df.NonlinearProblem):
    def __init__(self, a, L, bcs, V):
        df.NonlinearProblem.__init__(self)
        self.L = L          # residual (linear) form
        self.a = a          # Jacobian (bilinear) form
        self.bcs = bcs      # Dirichlet boundary conditions
        self.P = 0.0        # point-load magnitude, updated externally between solves
        self.V = V          # function space whose sub(1) component carries the load

    def F(self, b, x):
        # Assemble the residual, add the point load, then apply the BCs.
        # R and L_tilde are defined globally elsewhere in the script.
        df.assemble(self.L, tensor=b)
        point_source = df.PointSource(self.V.sub(1), df.Point(0.0, R / L_tilde), self.P)
        point_source.apply(b)
        for bc in self.bcs:
            bc.apply(b, x)

    def J(self, A, x):
        # Assemble the Jacobian and apply the BCs
        df.assemble(self.a, tensor=A)
        for bc in self.bcs:
            bc.apply(A, x)
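For reference, the problem class is driven by a standard Newton solver, roughly like the sketch below (names such as u for the solution Function and load_steps for my list of non-dimensional load increments are not shown above and are just placeholders here):

problem = ArchDynamicsEquation(a, L, bcs, V)
solver = df.NewtonSolver()

for load in load_steps:               # increments of the non-dimensional load
    problem.P = load                  # update the point-load magnitude before each solve
    solver.solve(problem, u.vector())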
In the code above, R is the original length scale of the problem and L_tilde is the non-dimensionalisation parameter. My question is how I should non-dimensionalise P; specifically, what dimension FEniCS assumes for the point source. Mathematically, I know that if it is treated as a three-dimensional Gaussian I would have to divide by L_tilde^3. However, the problem here is defined in two dimensions, so does that mean I only have to divide by L_tilde^2? But then this would seem not to line up with the definition of Newtons.
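To make the question concrete, these are the two scalings I am deciding between (the numbers are placeholders, not my real values):

P_phys = 5.0                      # physical point load in Newtons
P_if_3d = P_phys / L_tilde**3     # if PointSource behaves like a 3D delta/Gaussian
P_if_2d = P_phys / L_tilde**2     # if it behaves like a 2D delta on this 2D mesh
problem.P = P_if_2d               # currently trying the 2D option, but unsure it is correct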