FEniCSx JIT vs directly expressing tensor operations

Dear all
I have a question regarding the performance benefits (or lack thereof) of expressing tensor operations component-wise versus letting the JIT compiler handle symbolic UFL operations.
Is it beneficial, or does it hinder assembly performance?

Thanks

This is a fairly generic question, which I believe deserves some more context.
What specific operations are you thinking of? Do you have an example form?

For example, in continuum mechanics, the weak contribution of each material to the variational form is S : grad(v) * dx.

In terms of assembly and solve time, is it more performant to define S as a Python function (as in the web examples) and use the ufl.inner and ufl.grad operators, or to define each component of S and of grad(v) as scalar expressions and write the inner product out explicitly?
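For concreteness, here is a minimal sketch of the two styles for a 2D vector problem (using the dolfinx 0.7-style `fem.functionspace` API; the stress definition, mesh size and material constants are just placeholders):

```python
import ufl
from mpi4py import MPI
from dolfinx import fem, mesh

domain = mesh.create_unit_square(MPI.COMM_WORLD, 32, 32)
V = fem.functionspace(domain, ("Lagrange", 1, (2,)))
u = fem.Function(V)          # current state (nonlinear residual)
v = ufl.TestFunction(V)

def S(u):
    # Placeholder linear-elastic stress; mu and lmbda are arbitrary here.
    mu, lmbda = 1.0, 1.0
    eps = ufl.sym(ufl.grad(u))
    return 2.0 * mu * eps + lmbda * ufl.tr(eps) * ufl.Identity(len(u))

# Style 1: compact symbolic form using ufl.inner / ufl.grad
F_compact = ufl.inner(S(u), ufl.grad(v)) * ufl.dx

# Style 2: the same contraction written out component by component
S_u, grad_v = S(u), ufl.grad(v)
F_components = sum(S_u[i, j] * grad_v[i, j]
                   for i in range(2) for j in range(2)) * ufl.dx
```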

They should boil down to more or less equivalent forms (when derivative loops are expanded), so I would expect the same performance.

What I would do is write the two forms and time them.
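As a rough sketch of how one might do that (assuming the F_compact and F_components forms from the snippet above; this only times vector assembly of the residual and re-allocates the vector each iteration, so treat the numbers as indicative):

```python
import time
from dolfinx import fem

for name, form_ufl in [("inner/grad", F_compact), ("component-wise", F_components)]:
    compiled = fem.form(form_ufl)      # JIT compilation happens here (and is cached)
    fem.assemble_vector(compiled)      # warm-up assembly
    n = 20
    t0 = time.perf_counter()
    for _ in range(n):
        fem.assemble_vector(compiled)  # reassemble; allocation overhead included
    print(f"{name}: {(time.perf_counter() - t0) / n:.3e} s per assembly")
```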

Usually I prefer the gradient syntax, as it saves a lot of work and improves readability.