Heat equation, backward Euler scheme, strange results

Hi everyone,

I am solving the heat equation with a standard backward Euler scheme, but I am getting some very strange results. My equation is

\begin{align*} \rho C_p \frac{\partial u}{\partial t} - \kappa \Delta u &= 0, \\ \kappa \frac{\partial u}{\partial n} &= \beta (u_\text{amb} - u), \end{align*}

where \rho C_p = 3 \times 10^6, \kappa = 50, u_\text{amb} = 300, \beta = 15. I suspect the problem is related to \rho C_p being so large, since the scheme works when all the coefficients are set to 1. Any insight would be appreciated!
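For reference, after one backward Euler step of size \Delta t, the fully discrete weak problem (which the forms a and L in the code below assemble) is: find u^{n+1} \in V such that

\begin{align*} &\rho C_p \int_\Omega u^{n+1} v \,\mathrm{d}x + \Delta t\,\kappa \int_\Omega \nabla u^{n+1} \cdot \nabla v \,\mathrm{d}x + \Delta t\,\beta \int_{\partial\Omega} u^{n+1} v \,\mathrm{d}s \\ &\quad = \rho C_p \int_\Omega u^n v \,\mathrm{d}x + \Delta t\,\beta \int_{\partial\Omega} u_\text{amb} v \,\mathrm{d}s \quad \forall v \in V, \end{align*}

where u^n is the solution at the previous time step.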

Here is a minimum working example:

import numpy as np
from mpi4py import MPI
from dolfinx import fem, mesh, io
from dolfinx.fem import petsc
from ufl import TrialFunction, TestFunction, grad, dx, ds, inner

# material properties
rho_Cp = 3e6   # density * specific heat capacity
kappa = 50     # thermal conductivity
beta = 15      # convection coefficient

def calculate_errors(domain, x):
    # global L2 norm of x (unused below; kept for debugging)
    my_form = fem.form(inner(x, x) * dx)
    return np.sqrt(domain.comm.allreduce(fem.assemble_scalar(my_form), op=MPI.SUM))

domain = mesh.create_unit_square(MPI.COMM_WORLD, 100, 100, mesh.CellType.triangle)
V = fem.FunctionSpace(domain, ("CG", 1))

# Create initial condition (uniform 300 K, roughly room temperature)
def initial_condition(x):
    return 300 + 0*x[0]

u_amb = fem.Function(V, name="u_amb") # ambient temperature
u_amb.interpolate(initial_condition)
u_old = fem.Function(V, name="u_old") # initial temperature
u_old.interpolate(initial_condition)
u_new = fem.Function(V, name="uh") # name of solution function 
u_new.interpolate(initial_condition)

# Create boundary condition
tdim = domain.topology.dim
fdim = tdim - 1
domain.topology.create_connectivity(fdim, tdim)
boundary_facets = mesh.exterior_facet_indices(domain.topology)
boundary_dofs   = fem.locate_dofs_topological(V=V, entity_dim=fdim, entities=boundary_facets)  # unused below: the Robin condition enters weakly via ds

# writing the function to an xdmf file
xdmf_t = io.XDMFFile(domain.comm, "temperature.xdmf", "w", encoding=io.XDMFFile.Encoding.ASCII)
xdmf_t.write_mesh(domain)
xdmf_t.write_function(u_new, 0)

# Define temporal parameters (seconds)
t = 0 # Start time
T = 0.1 # Final time
num_steps = 10
dt = T / num_steps # time step size

tol = 1e-6
max_iterations = 100

for i in range(num_steps):
    t += dt  
    u, v = TrialFunction(V), TestFunction(V)
    a = (rho_Cp * inner(u, v) * dx 
         + dt * kappa * inner(grad(u), grad(v)) * dx 
         + dt * beta * inner(u, v) * ds
        )
    L = (rho_Cp * inner(u_old, v) * dx + dt * beta * inner(u_amb, v) * ds
        )
    problem = petsc.LinearProblem(a, L)
    u_new = problem.solve()
    u_new.name = "uh"
    u_old = u_new  # rebind to the new solution; safe here since a and L are rebuilt next step
    xdmf_t.write_function(u_new, t)
xdmf_t.close()

and I get this result:

[figure omitted: plot of the computed temperature field]

which shows some strange oscillations in the bottom right-hand corner.

Please do let me know if you have any insight into why this is happening!

Many thanks,
Katie

It seems you’re only weakly imposing boundary data through a Newton-type (Robin) boundary condition, so I suspect the Gibbs-like oscillations you’re seeing are due to a time step which is too large. What happens if you choose a much smaller time step size, or an adaptive step which attempts to predict and control the maximum change in your solution between steps? A sketch of the latter follows below.
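Here is a minimal sketch of such a controller, assuming a hypothetical solve_step(u_old, dt) callable that performs one backward Euler solve and returns the new nodal values as a NumPy array (a placeholder, not dolfinx API):

import numpy as np

def adaptive_march(solve_step, u0, t_end, dt0, du_max=5.0, dt_min=1e-12):
    # March to t_end, halving dt whenever the max nodal change in one step
    # exceeds du_max, and growing dt when the change is comfortably small.
    t, dt, u_old = 0.0, dt0, u0.copy()
    while t < t_end:
        dt = min(dt, t_end - t)              # never step past the final time
        u_new = solve_step(u_old, dt)        # one backward Euler solve
        du = np.max(np.abs(u_new - u_old))   # largest nodal change this step
        if du > du_max and dt > dt_min:
            dt = 0.5 * dt                    # reject: retry with a smaller step
            continue
        t += dt
        u_old = u_new                        # accept the step
        if du < 0.5 * du_max:
            dt = 1.5 * dt                    # grow cautiously
    return u_old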

Hi nate,

Thank you for your reply!

I tried dt = 10^{-2}, 10^{-4}, 10^{-7}, 10^{-10} and 10^{-13}, and they all produce the same behaviour. Surely I shouldn’t need a time step smaller than that?

Best,
Katie

Hi Nate,

It seems that the issue was: instead of writing

problem = petsc.LinearProblem(a, L)

I had to specify

problem = petsc.LinearProblem(a, L, petsc_options={"ksp_type": "preonly", "pc_type": "lu"})

I’m guessing that, with such large coefficients, the system is badly scaled and the default iterative solver was returning a solution that hadn’t fully converged, whereas a direct LU solve handles it without trouble. Doing it this way completely resolves the issue!
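For anyone who runs into the same thing: one way to confirm that an under-converged iterative solve is the culprit is to make PETSc print its residuals and error out on non-convergence (these are standard PETSc options, passed straight through by LinearProblem):

problem = petsc.LinearProblem(
    a, L,
    petsc_options={
        "ksp_type": "gmres",                 # an iterative Krylov solver
        "pc_type": "ilu",                    # with an incomplete-LU preconditioner
        "ksp_monitor": None,                 # print the residual at each iteration
        "ksp_error_if_not_converged": True,  # raise instead of failing silently
    },
)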

Cheers,
Katie