Subdomains, problems following Tutorial

I am working on a mode-extraction problem. I am successful when working with a single material, but I run into problems when trying to assign different materials (subdomains).
The subdomains are defined in Gmsh, and I retrieve them via

mesh, cell_tags, facet_tags = gmshio.read_from_msh(p, MPI.COMM_SELF, 0, gdim=3)


The following one-material (single-domain) code seems to work (it raises no errors; the mode extraction and problem solution are omitted here):

from dolfinx.fem import functionspace, Function
from dolfinx.mesh import create_box
import ufl
from mpi4py import MPI
import numpy as np
from ufl import grad
import dolfinx

ll = np.array([0,0,0]) #lower left corner [m]
ur = np.array([1.5,1,0.8]) #upper right corner [m]
dims = ur - ll
meshSizeMin = 0.1
numCellsPerDim = np.ceil(dims / meshSizeMin).astype(np.int64)
mesh = create_box(MPI.COMM_SELF, [list(ll),list(ur)],list(numCellsPerDim))

V = functionspace(mesh, ("Lagrange", 1, (mesh.geometry.dim,)))
# V = functionspace(mesh, ("DG", 0, (mesh.geometry.dim,)))
u = ufl.TrialFunction(V)
v = ufl.TestFunction(V)

μ = 0.5
λ = 0.5

def ϵ(v): # strain tensor
    return ufl.sym(grad(v))

def σ(v): # stress tensor (isotropic linear elasticity)
    return 2.0*μ*ϵ(v) + λ*ufl.nabla_div(v)*ufl.Identity(len(v))
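As a quick sanity check of this constitutive law outside UFL, here is a plain-NumPy sketch (made-up strain values, not dolfinx code) confirming that both terms of σ are rank-2 tensors of the same shape, which is why the scalar-μ/λ version works:

```python
import numpy as np

mu, lam = 0.5, 0.5  # same scalar Lamé parameters as above

def eps(grad_u):
    # symmetric part of the displacement gradient
    return 0.5 * (grad_u + grad_u.T)

def sigma(grad_u):
    e = eps(grad_u)
    # 2*mu*eps is 3x3, and lam*tr(eps)*I is also 3x3, so the sum is well defined
    return 2.0 * mu * e + lam * np.trace(e) * np.eye(3)

grad_u = np.diag([0.1, 0.2, 0.3])  # hypothetical displacement gradient
s = sigma(grad_u)
print(s.shape)  # (3, 3)
```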


My aim is to make λ and μ functions, following the tutorial (Defining subdomains for different materials — FEniCSx tutorial). Here I only do this for λ to keep the example minimal:

λ = Function(V)
cabCells = [1,2,3]
λ_cab = 0.7
λ.x.array[cabCells] = np.full_like(cabCells, λ_cab, dtype=dolfinx.default_scalar_type)

I get the following error:

ValueError                                Traceback (most recent call last)
Cell In[41], line 1
----> 1 print(σ(u))

Cell In[38], line 29, in σ(v)
     28 def σ(v): #Stress (Spannung)
---> 29     return 2.0*μ*ϵ(v) + λ*ufl.nabla_div(v)*ufl.Identity(len(v))

File ~/miniconda3/envs/fem/lib/python3.10/site-packages/ufl/, in _mul(self, o)
    181     return NotImplemented
    182 o = as_ufl(o)
--> 183 return _mult(self, o)

File ~/miniconda3/envs/fem/lib/python3.10/site-packages/ufl/, in _mult(a, b)
    153     ti = ai + bi
    155 else:
--> 156     raise ValueError(f"Invalid ranks {r1} and {r2} in product.")
    158 # TODO: I think applying as_tensor after index sums results in
    159 # cleaner expression graphs.
    160 # Wrap as tensor again
    161 if ti:

ValueError: Invalid ranks 1 and 2 in product.

I have started to work around this, using the dot product etc., but essentially I am not fully understanding what's happening here. I assume I might have to integrate λ and μ?
Thanks for any help!

DOLFINx version: 0.7.3 based on GIT commit: b488a495d3d74a048e5866f437f2df20b075d907 of

<class 'numpy.complex128'>


If you define λ as λ = Function(V), then λ is vector-valued, because V is a vector space. The expression λ*ufl.nabla_div(v)*ufl.Identity(len(v)) is then invalid, since the tensor rank of λ is 1 while the tensor rank of ufl.nabla_div(v)*ufl.Identity(len(v)) is 2 (a matrix).

You could make the product itself legal by reordering it as ufl.nabla_div(v)*ufl.Identity(len(v))*λ, but that would then fail with a shape mismatch, because you would be adding a vector to the rank-2 tensor 2.0*μ*ϵ(v).

To properly define λ, follow the tutorial and define a scalar space:

DG0 = functionspace(mesh, ("DG", 0))
λ = Function(DG0)

cabCells = [1, 2, 3]
λ_cab = 0.7
λ.x.array[cabCells] = np.full_like(cabCells, λ_cab, dtype=dolfinx.default_scalar_type)
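The rank bookkeeping can be mimicked with plain NumPy array ranks (a sketch only; UFL's `*` is stricter than NumPy broadcasting, which is why the original code fails only inside UFL):

```python
import numpy as np

dim = 3
λ_scalar = 0.7             # rank 0: what Function(DG0) gives pointwise
λ_vector = np.zeros(dim)   # rank 1: what Function(V) gives when V is vector-valued
identity = np.eye(dim)     # rank 2: ufl.Identity(len(v))

# UFL's `*` only accepts scalar*tensor or a matrix*vector-style contraction,
# so a rank-1 λ times the rank-2 identity raises
# "Invalid ranks 1 and 2 in product."; a rank-0 λ is fine.
print(np.ndim(λ_scalar), λ_vector.ndim, identity.ndim)  # 0 1 2
```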


Just to be sure:
I can do all my other things with
V = functionspace(mesh, ("Lagrange", 1, (mesh.geometry.dim,)))
as I had it, and just define this additional scalar space to create these material functions on it?
Feels like cheating somehow.
I would be grateful if you could confirm.
Otherwise: thank you so much! All seems to be working now!

Yes, because V is the function space of your problem.

The idea behind the DG0 space comes from the fact that its degrees of freedom are defined as a single constant per cell. This matches the tags assigned for marking domains by a mesh generator like Gmsh. See, for instance
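the one-constant-per-cell idea sketched in plain NumPy (hypothetical tag and material values; in dolfinx you would index λ.x.array with the cell indices from the mesh tags instead):

```python
import numpy as np

# hypothetical per-cell material tags, as a mesh generator like Gmsh would assign them
cell_tags = np.array([1, 1, 2, 2, 2, 1])

# material tag -> λ value (made-up numbers)
λ_per_tag = {1: 0.5, 2: 0.7}

# a DG0 function has exactly one degree of freedom per cell,
# so its dof array lines up with the cell tags one-to-one
λ_array = np.empty(cell_tags.shape, dtype=np.float64)
for tag, value in λ_per_tag.items():
    λ_array[cell_tags == tag] = value

print(λ_array)  # [0.5 0.5 0.7 0.7 0.7 0.5]
```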

Thanks a lot for taking the time to explain/clarify!