How to calculate the optimal mesh size?

I am solving a simple diffusion equation in a cubic domain with homogeneous Dirichlet boundary conditions. The initial condition is a delta function at the center of the cubic domain.

I calculated this using an unstructured tetrahedral mesh where the maximum volume of a tetrahedral cell is (i) 0.5 nm^3, (ii) 1.0 nm^3, and (iii) 2.0 nm^3. I compared the result with an analytical solution, but the error, which presumably stems from the spatial discretization, is too large even with the finest of those three meshes.

I am wondering if there is any way to theoretically estimate the maximum mesh size needed to achieve a desired accuracy. I would also like to ask whether my calculation is correct, since the error shown here seems fairly large. My MWE is attached below.
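For reference, the analytical solution I compare against can be evaluated as a separable sine series for a unit point release in a box with homogeneous Dirichlet walls. This is a minimal sketch, assuming the box size L = 40 nm, the source at the centre, and the diffusion coefficient from the MWE below:

```python
import numpy as np

# Series solution for a unit point release at x0 in [0, L]^3 with
# homogeneous Dirichlet walls; the 3-D solution is separable into
# three identical 1-D eigenfunction-series factors.
# L, D, x0 are assumptions matching the MWE (40 nm box, source at centre).

def factor_1d(x, t, x0=20.0, L=40.0, D=0.123, nmax=200):
    """1-D factor of the Dirichlet-box Green's function at time t."""
    n = np.arange(1, nmax + 1)
    return (2.0 / L) * np.sum(np.sin(n * np.pi * x0 / L)
                              * np.sin(n * np.pi * x / L)
                              * np.exp(-D * (n * np.pi / L) ** 2 * t))

def u_exact(point, t, **kw):
    """3-D analytical solution: product of the three 1-D factors."""
    return np.prod([factor_1d(c, t, **kw) for c in point])
```

Before the diffusion front reaches the walls this agrees with the free-space Gaussian (4 pi D t)^(-3/2) exp(-|x - x0|^2 / (4 D t)), which is a quick sanity check on the series.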

import numpy as np
from fenics import *

# Parameters
dt     = 0.1        # time step size
t_max  = 210        # total simulation time
Diff   = 0.123      # diffusion coefficient


# Create mesh and define function space
mesh = Mesh(filename)   # filename: path to the tetrahedral mesh file
V = FunctionSpace(mesh, 'P', 1)

# Define boundary condition
def boundary_D(x, on_boundary):
    return on_boundary
bc = DirichletBC(V, Constant(0.0), boundary_D)


# Set initial condition: approximate the delta function at the centre
# with a unit-magnitude PointSource
u_n = Function(V)
u_n.vector()[:] = 0.0
x_init, y_init, z_init = 20.0, 20.0, 20.0
p = PointSource(V, Point(x_init, y_init, z_init), 1.0)
p.apply(u_n.vector())


# Define variational problem (backward Euler in time)
u = TrialFunction(V)
v = TestFunction(V)
a = u*v*dx + Diff*dt*dot(grad(u), grad(v))*dx
L = u_n*v*dx


# Time stepping
u = Function(V)  # solution at the new time level
t = 0
for n in range(int(t_max/dt)):
    t += dt
    solve(a == L, u, bc)
    u_n.assign(u)

Have you tried investigating whether reducing the size of the time step influences the accuracy of your problem?

Yes, as you can see in the figure, I tried different time steps, dt = 0.1, 0.2, and 1 (indicated by the marker shape). Reducing the time step does not change the numerical result much, and there is still a large error relative to the analytical result.

I'm sorry, I didn't catch that the first time I looked at the figure.
It seems like your spatial error is dominating the temporal discretization error.

One thing you could do is look at the temporal and spatial convergence rates.
That would be assuming
e_i = sqrt(assemble(inner(u - u_ex, u - u_ex)*dx)) is proportional to h^r,
where e_i is the error at the ith refinement level.

Then, by refining the mesh (halving h at each level), you can obtain the rate r by computing
r = log(e_{i+1}/e_{i}) / log(1/2)
This rate should be equal to two once the temporal error has been eliminated.
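For concreteness, here is how the rate formula works on a sequence of errors. The numbers below are made-up illustrative values following e = C*h^2 exactly; in practice they would come from errornorm or the assembled expression above:

```python
import numpy as np

# Hypothetical errors e_i on meshes with h, h/2, h/4 (illustrative values
# chosen to follow e = C*h^2 exactly; real values would be measured).
errors = [4.0e-2, 1.0e-2, 2.5e-3]

# r = log(e_{i+1}/e_i) / log(1/2), since h is halved at each refinement
rates = [np.log(errors[i + 1] / errors[i]) / np.log(0.5)
         for i in range(len(errors) - 1)]
print(rates)  # each rate is 2 for this synthetic second-order sequence
```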


Thank you for the information. I will calculate the convergence rate and estimate the required size of the mesh.

I just carried out another simulation using a finer mesh (maximum cell volume 0.1 nm^3); the result then overshot the analytical solution and the error became larger (figure attached).
Apparently the solution is not converging to the analytical one as the mesh is refined, which looks really odd to me.
Do you think this can happen under some circumstances, or is there a fatal error in my calculation?

There are several things you can consider:

  1. Check if you get the analytical solution after the first time step.
  2. Check if the solution changes when using a structured mesh (e.g. BoxMesh(Point(0, 0, 0), Point(40, 40, 40), N, N, N)).
  3. Check if the solution changes when increasing the degree of the finite element function space.

You should also note that the error can be related to using the point source. See the other post I helped you out with:


Keep in mind that \delta(x - x_0) \notin L_2(\Omega), so I would not be surprised if you recover suboptimal \mathcal{O}(h) rates of convergence from the initial condition alone. Locally refining the mesh around the point source may be a better plan. See also here.
