Inner vs index notation

Hello everyone,

I was wondering what the difference is between index notation and the ufl.inner() function. To be more precise, I’m taking the inner product of two rank-2 tensors. When the shapes of these tensors are incompatible, I expect to get an error both with index notation and with ufl.inner(). However, when I use index notation I still obtain a result. How does UFL interpret this shape mismatch and still produce a result?

Consider the following MWE:

import ufl

i = ufl.Index()
j = ufl.Index()

a = ufl.as_tensor([[0, 0],
                   [0, 0]])  # shape (2, 2)

b = ufl.as_tensor([[0, 0]])  # shape (1, 2)

c = a[i, j] * b[i, j]
print(f'Result using index notation: {c}')

c_ = ufl.inner(a, b)
print(f'Result using ufl.inner notation: {c_}')

I guess I’m misunderstanding something in the index notation.

Thanks in advance.

When pure index notation is used, UFL sums the repeated indices over the smallest common subset of the index ranges.
This happens in ufl/algebra.py in the FEniCS/ufl repository on GitHub.

Thus it multiplies the first row of a entrywise by b and sums the result; for the MWE above that is c = a[0,0]*b[0,0] + a[0,1]*b[0,1].

inner() has additional checks to ensure that the dimensions match, which is why the shape mismatch raises an error there.
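For instance (a minimal sketch; the exact behaviour and error message depend on the installed UFL version):

import ufl

i, j = ufl.indices(2)  # two free indices

a = ufl.as_tensor([[1, 2],
                   [3, 4]])  # shape (2, 2)
b = ufl.as_tensor([[5, 6]])  # shape (1, 2)

# Index notation: the repeated indices are summed over the smallest
# common range, so i only takes the value 0 and this is effectively
# a[0, 0]*b[0, 0] + a[0, 1]*b[0, 1].
c = a[i, j] * b[i, j]
print(f'Index notation result: {c}')

# inner() checks that the operand shapes match and raises instead.
try:
    ufl.inner(a, b)
except Exception as exc:
    print(f'ufl.inner raised: {exc}')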


Great, thanks for the response.