ArityMismatch in Nonlinear Heat Equation in 1D

Hello, I am trying to solve the nonlinear heat equation in space-time

u_t - \Delta u + g(x, t) + u^2 = 0

for the manufactured true solution

u(x, t) = \cos(t)\sin(\pi x)

which yields

g(x, t) = \sin(\pi x)(\sin(t) - \pi^2\cos(t) - \cos^2(t)\sin(\pi x))
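
For completeness, this choice of g can be verified symbolically; a quick sympy sanity check, separate from the FEniCS code below:

import sympy as sp

x, t = sp.symbols('x t')
u = sp.cos(t)*sp.sin(sp.pi*x)
g = sp.sin(sp.pi*x)*(sp.sin(t) - sp.pi**2*sp.cos(t) - sp.cos(t)**2*sp.sin(sp.pi*x))
# The residual u_t - u_xx + g + u^2 should vanish identically
print(sp.simplify(sp.diff(u, t) - sp.diff(u, x, 2) + g + u**2))  # prints 0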

The weak form of this nonlinear heat equation in space-time slab S_n = [t_n, t_{n+1}] \times \Omega is

\int_{t_n}^{t_{n+1}}\Big[\Big(u_t(\cdot, t), v(\cdot, t)\Big) + \Big(\nabla u(\cdot, t), \nabla v(\cdot, t)\Big) + \Big(g(\cdot, t), v(\cdot, t)\Big) + \Big(u(\cdot, t)^2, v(\cdot, t)\Big)\Big]\,dt = 0

where \big(\cdot\,,\cdot\big) denotes the spatial L^2 inner product on \Omega.

Projecting onto a finite-dimensional function space with u(x,t) = \sum_{i=0}^{q_1}\alpha_i(x)l_i(t) and v(x,t) = \sum_{j=0}^{q_1-1}\beta_j(x)\tilde{l}_j(t), where the l_i(t) are the Lagrange basis functions in time of degree q_1 and the \tilde{l}_j(t) those of degree q_1-1, rearranging a bit, and using Gaussian quadrature with K weights w_k and points s_k^n for the time integration gives

\sum_{k=0}^{K-1}w_k\tilde{l}_j(s_k^n)\Big[\sum_{i=0}^{q_1}\Big[\dot{l}_i(s_k^n)\Big(\alpha_i, \beta_j\Big) + l_i(s_k^n)\Big(\nabla\alpha_i, \nabla\beta_j\Big)\Big] + \Big(g(\cdot, s_k^n), \beta_j\Big) + \Big(\big(\sum_{i=0}^{q_1}\alpha_i l_i(s_k^n)\big)^2, \beta_j\Big)\Big] = 0 \quad \forall j \in \{0, \dots, q_1-1\}
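
For the time integration I map a Gauss-Legendre rule onto each slab; as a standalone numpy check (the slab endpoints here are just illustrative), a rule with 3q_1 points is more than enough for the polynomial-in-time terms:

import numpy as np

q1 = 2
tL, tR = 0.0, 0.02  # illustrative slab endpoints
p, w = np.polynomial.legendre.leggauss(3*q1)   # rule on [-1, 1]
p = ((tR - tL)/2.0)*p + (tL + tR)/2.0          # map points to [tL, tR]
w = ((tR - tL)/2.0)*w                          # scale weights
# 3*q1 points are exact for polynomials up to degree 6*q1 - 1, covering all
# polynomial-in-time terms of the discrete form (g is only integrated approximately)
print(np.isclose(np.dot(w, p**3), (tR**4 - tL**4)/4.0))  # True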

I use FEniCS to solve for the \alpha_i functions, since everything else is known. However, when I run my code, I get the following error when solving the problem:

ufl.algorithms.check_arities.ArityMismatch: Adding expressions with non-matching form arguments ('v_0',) vs ().

My minimal working example is as follows.

from fenics import *
import numpy as np
set_log_level(LogLevel.WARNING)

def Lagrange(initial, final, degree, j, x):
	# j-th Lagrange basis polynomial of the given degree on equispaced
	# nodes in [initial, final], evaluated at x
	I = np.linspace(initial, final, degree+1)
	z = 1.0
	for k in range(degree+1):
		if k != j:
			z *= (x - I[k])/(I[j] - I[k])
	return z

def Lagrange_deriv(initial, final, degree, j, x):
	# Derivative of the j-th Lagrange basis polynomial (product-rule expansion)
	I = np.linspace(initial, final, degree+1)
	z = 0.0
	for m in range(degree+1):
		v = 1.0
		for k in range(degree+1):
			if k != j and k != m:
				v *= (x - I[k])/(I[j] - I[k])
		if m != j:
			z += (1.0/(I[j] - I[m]))*v
	return z

def TestLagrange(initial, final, degree, j, x):
	# j-th Lagrange basis polynomial of degree `degree`-1 (on `degree`
	# equispaced nodes in [initial, final]) used for the test space in time
	I = np.linspace(initial, final, degree)
	z = 1.0
	if degree > 1:
		for k in range(degree):
			if k != j:
				z *= (x - I[k])/(I[j] - I[k])
	return z

t0 = 0.0
T = 1.0
N = 50
Nt = N
Nx = N
q1 = 2
deg1 = 2

t1 = np.linspace(t0, T, Nt+1)
mesh = UnitIntervalMesh(Nx)

u_init = Expression('cos(t)*sin(pi*x[0])', degree=4, t=t0)
g = 'sin(pi*x[0])*(sin(t) - pi*pi*cos(t) - cos(t)*cos(t)*sin(pi*x[0]))'

Vh1 = FunctionSpace(mesh, 'CG', deg1)

# Mixed space holding the q1 unknown time coefficients alpha_1, ..., alpha_q1
# (alpha_0 is taken from the end of the previous slab)
if q1 == 1:
	V1 = FunctionSpace(mesh, 'CG', deg1)
else:
	P = FiniteElement('CG', interval, deg1)
	element_array = []
	for i in range(q1):
		element_array.append(P)
	element1 = MixedElement(element_array)
	V1 = FunctionSpace(mesh, element1)

if q1 == 1:
	tup1 = (0.0)
else:
	tup1 = ()
	tup_single_system1 = (0.0,)
	for i in range(q1):
		tup1 = tup1 + tup_single_system1

u_D = Constant(tup1)

def boundary_D(x, on_boundary):
	return on_boundary

bc1 = DirichletBC(V1, u_D, boundary_D)

alpha_init = interpolate(u_init, Vh1)
alpha_n = interpolate(u_init, Vh1)

alpha = []
alpha.append(alpha_init)

for n in range(Nt):
	# Time slab [t_n, t_{n+1}]: Gauss-Legendre rule mapped from [-1, 1] onto the slab
	tL = t1[n]
	tR = t1[n+1]
	(p,w) = np.polynomial.legendre.leggauss(3*q1)
	p = ((tR - tL)/2.0)*p + (tL + tR)/2.0
	w = ((tR - tL)/2.0)*w
	alpha_loc = Function(V1)
	beta = TestFunctions(V1)
	F = Constant(0.0)
	for j in range(q1):
		for k in range(len(p)):
			TLagr = Constant(TestLagrange(tL, tR, q1, j, p[k]))
			g_k = interpolate(Expression(g, degree=4, t=p[k]), Vh1)
			p1 = Constant(0.0)
			p2 = Constant(0.0)
			for i in range(q1+1):
				Lagr = Constant(Lagrange(tL, tR, q1, i, p[k]))
				dLagr = Constant(Lagrange_deriv(tL, tR, q1, i, p[k]))
				if i == 0:
					p1 += dLagr*alpha_n*beta[j] + Lagr*alpha_n.dx(0)*beta[j].dx(0)
					p2 += Lagr*alpha_n
				else:
					p1 += dLagr*alpha_loc[i-1]*beta[j] + Lagr*alpha_loc[i-1].dx(0)*beta[j].dx(0)
					p2 += Lagr*alpha_loc[i-1]
			weight = Constant(w[k])
			F += weight*TLagr*(p1 + g_k*beta[j] + p2*p2*beta[j])
	F = F*dx
	solve(F == 0, alpha_loc, bc1)

The arity mismatch occurs because the form F has the structure

F(\beta) = C + \tilde{F}(\beta)\text{ ,}

where C is a constant in \mathbb{R} and \tilde{F}(\beta) is linear in the TestFunction \beta. A residual form passed to solve(F == 0, ...) must be linear in a test function, i.e. every summand must have arity 1, and F is not linear in \beta unless C = 0. In your code C is Constant(0.0), but the whole point of a Constant is that the generated code must work for any numerical value, so the form compiler does not special-case the situation where that value happens to be zero. The same mixing of arities occurs in p1 = Constant(0.0), to which arity-1 terms are added afterwards.
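
You can reproduce this in isolation; a minimal sketch, assuming a standard legacy FEniCS (dolfin) installation and independent of your MWE:

from fenics import *

mesh = UnitIntervalMesh(4)
V = FunctionSpace(mesh, 'CG', 1)
u = Function(V)
v = TestFunction(V)

F_bad = (Constant(0.0) + u*v)*dx   # arity-0 Constant added to an arity-1 term
try:
	solve(F_bad == 0, u)
except Exception as e:
	print(type(e).__name__, e)     # ArityMismatch, as in your traceback

F_ok = (0.0 + u*v)*dx              # the literal zero is folded away by UFL
solve(F_ok == 0, u)                # compiles and solves (trivially, u = 0)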

You get the desired result by using a bare floating-point literal instead of a Constant (UFL folds a literal zero away as soon as something is added to it, so no arity-0 term ever enters the sum), i.e.,

F = 0.0 #Constant(0.0)

and also

p1 = 0.0 #Constant(0.0)

to initialize p1. (p2 is only ever multiplied by beta[j] afterwards, so it never mixes arities and can stay a Constant, though p2 = 0.0 is equally fine.) Alternatively, using

F = Constant(0.0)*beta[0]

and

p1 = Constant(0.0)*beta[0]

also works, since every summand is then of arity 1 from the start, without relying on UFL's folding of literal zeros.
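
The difference is already visible at the UFL expression level; a small sketch:

from fenics import *

mesh = UnitIntervalMesh(2)
V = FunctionSpace(mesh, 'CG', 1)
v = TestFunction(V)

print(0.0 + v)            # the literal zero is absorbed; the result is just the test function
print(Constant(0.0) + v)  # a genuine sum of a coefficient (arity 0) and an argument (arity 1)
print(Constant(0.0)*v)    # a product, arity 1 throughout, so arity-1 terms can be added safely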

Thank you so much! I appreciate your help. I knew it would make sense when it was explained to me.