Performance with JIT for FEniCS and/or FEniCSx

Hi, I was wondering how the JIT in FEniCS can be turned off. I am noticing a large variance between computations, whether I run them multiple times or even just once. Does FEniCS have a large amount of randomness that could affect this? Also, if I keep increasing the number of iterations, I notice that the runtime isn't linear. Is it known how that scales?

You cannot turn off JIT if you are using the Python layer of DOLFIN/DOLFINx. However, there are several things you can do to minimize the time JIT takes.

  1. Make sure that your variational form only needs to be compiled once, as shown in:
    Out of memory error in frequency for loop - #4 by nate
    and
    Reassign scalar value within for loop - #2 by dokken
    and in the sketch below.
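
Here is a minimal DOLFINx sketch of that pattern. It is not taken from your code: the mesh, coefficient values and boundary condition are purely illustrative, and API details such as `fem.functionspace` vs `fem.FunctionSpace` or the import path of `LinearProblem` differ slightly between DOLFINx releases. The point is that the varying parameter is wrapped in a `fem.Constant`, so the symbolic form (and hence the generated C kernel) is identical in every iteration and is JIT-compiled only once.

```python
# Minimal sketch (assumed setup, not the poster's code); API names may
# differ slightly between DOLFINx releases.
from mpi4py import MPI
from petsc4py import PETSc
import numpy as np
import ufl
from dolfinx import fem, mesh
from dolfinx.fem.petsc import LinearProblem

domain = mesh.create_unit_square(MPI.COMM_WORLD, 32, 32)
V = fem.functionspace(domain, ("Lagrange", 1))
u, v = ufl.TrialFunction(V), ufl.TestFunction(V)

# Wrap the varying coefficient in a Constant so the UFL form (and the
# generated code) stays identical across iterations: one JIT pass only.
kappa = fem.Constant(domain, PETSc.ScalarType(1.0))
f = fem.Constant(domain, PETSc.ScalarType(1.0))
a = kappa * ufl.inner(ufl.grad(u), ufl.grad(v)) * ufl.dx
L = f * v * ufl.dx

dofs = fem.locate_dofs_geometrical(V, lambda x: np.isclose(x[0], 0.0))
bc = fem.dirichletbc(PETSc.ScalarType(0.0), dofs, V)
problem = LinearProblem(a, L, bcs=[bc])

for value in (1.0, 2.0, 4.0):
    kappa.value = value   # update the data, not the symbolic form
    uh = problem.solve()  # no re-compilation happens inside the loop
```

The same idea applies in legacy DOLFIN: update an existing `Constant` with `assign` inside the loop rather than rebuilding the form from new Python objects each time.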

Without a minimal example illustrating this behaviour, it is hard to explain why you are experiencing it. DOLFIN is a very flexible tool: it allows the user to do things in many ways, and not all of them are efficient.

As a final note, the time to compile code has been reduced in DOLFINx. You can also exercise more control over the compilation; see:
https://jorgensd.github.io/dolfinx-tutorial/chapter4/compiler_parameters.html
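
As a hedged illustration of the mechanism that chapter covers (not your setup; the mesh and compiler flags are placeholders, and the keyword accepted by `fem.form` has been renamed across releases, e.g. `jit_params` vs `jit_options`, so check the signature in your installed version):

```python
# Sketch of passing options to the form compiler; keyword name and
# available options depend on the DOLFINx version.
from mpi4py import MPI
import ufl
from dolfinx import fem, mesh

domain = mesh.create_unit_square(MPI.COMM_WORLD, 8, 8)
V = fem.functionspace(domain, ("Lagrange", 1))
u, v = ufl.TrialFunction(V), ufl.TestFunction(V)

options = {
    "cffi_extra_compile_args": ["-O2"],  # flags handed to the C compiler
    "cffi_verbose": False,               # keep compiler output quiet
}
a_compiled = fem.form(ufl.inner(ufl.grad(u), ufl.grad(v)) * ufl.dx,
                      jit_options=options)
```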

Also, how do the computations scale when increasing the number of samples? We tried a simple differential equation and the scaling wasn't linear. Or does this depend on many factors?

Everything depends on how you code up your problem. Without an example of how you have created your variational forms, I cannot give you any concrete advice on why you are not seeing linear behaviour.