I am running the example “demo/cpp/documented/poisson/cpp/main.cpp”, where I have changed the mesh size to 512x512 via “UnitSquareMesh::create({{512,512}})” and added high-resolution timers around “solve(a == L, u, bc);” for profiling.
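The timing wrapper is roughly the following (a sketch using std::chrono; everything else is the unmodified demo code):

```cpp
#include <chrono>
#include <iostream>

// ... inside main(), after setting up a, L, bc and the Function u ...
auto t0 = std::chrono::high_resolution_clock::now();
solve(a == L, u, bc);
auto t1 = std::chrono::high_resolution_clock::now();
std::cout << "solve time: "
          << std::chrono::duration<double>(t1 - t0).count()
          << " s" << std::endl;
```

Each MPI rank prints its own wall time, which is where the “per process” figure below comes from.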
First I run the serial version, “./demo_poisson”, then “mpirun -n 4 ./demo_poisson”.
The serial solve takes about 2.3 seconds, the MPI run 2.5 seconds per process.
Should the parallel run not be at least 2-3 times faster than the serial run? I observed similar timings with a native conda installation of FEniCS on the same machine, so Docker does not seem to be the problem. I know that some problems are memory-bandwidth-limited when several processes access the same RAM. If that is the most likely explanation, can you recommend a demo where a significant speedup is still expected?
Thank you for the links. However, I can’t find an explanation in them.
Yes, Docker is not optimal in HPC environments, but I was asking about a single quad-core CPU. And even Docker shows almost perfect efficiency in the HPGMG-FE benchmark, as shown in the first paper. But I can’t find a discussion of why the runtime gets worse when going from 24 to 192 processors in the FEniCS benchmark (Fig. 3 of the first paper).
This seems to be a problem similar to the one I observed on a single CPU, but why? Is this a FEniCS implementation problem?
bmf, setting OMP_NUM_THREADS=1 is a very good point, but it didn’t change anything in my tests. I have tested a few other demos, and all of them run slower or at most ~10% faster on 4 MPI processes on the same quad-core CPU compared to the serial run.
(This has nothing to do with Docker. The behaviour is the same if I run a native conda installation.)
I’d appreciate it if somebody could provide a simple FEniCS benchmark that shows good scalability on a single desktop CPU. I know that I might have configured FEniCS in the wrong way or that I am using the wrong solver settings. But if none of the demos benefit from MPI, there seems to be a flaw somewhere in FEniCS.
As you are using the solve command with default options, you cannot expect good performance for larger problems: it uses a direct solver by default.
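For illustration, switching to an iterative solver in the C++ demo looks roughly like this (a sketch only; exact signatures differ between DOLFIN versions, and “hypre_amg” assumes your PETSc was built with Hypre):

```cpp
// Assemble the system explicitly and solve with CG + algebraic multigrid
// instead of solve(a == L, u, bc), which falls back to a direct (LU) solver.
Matrix A;
Vector b;
assemble(A, a);
assemble(b, L);
bc.apply(A, b);  // apply the Dirichlet condition to the assembled system

KrylovSolver solver("cg", "hypre_amg");
solver.parameters["relative_tolerance"] = 1e-8;
solver.solve(A, *u.vector(), b);
```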
Thanks, but I already tested that one as well. The wall time for the instruction “solver.solve(*u.vector(), b);” is 2.0 seconds for the serial run and 3.8 seconds for the parallel run with “mpirun -n 4” (mesh size increased to 512x512 in both cases).
Could you please try it yourself and report your timing results?
Another thing to note is that the solve time is not very large, with quite a lot of variability if you run the code multiple times. (I used the quay.io/fenicsproject/dev:latest Docker image.)
Thank you for the detailed numbers. It’s good to see that timings improve on multi-processor machines, but it makes the numbers more difficult to compare to my case. I am pretty sure that other processes and timing variability have no significant effect on my timings.
Your results show slightly better scalability than mine, but a speedup of about 2 when going from 1 to 4 processes is not really impressive. I’m not sure whether this is specific to the singular-poisson example or to FEniCS in general. But PETSc itself shows close to 100% efficiency (>98% or >99.8%, I am not sure) on Poisson examples with ~1 second runtime, at least that is what I remember from our classes.
So, I’d like to see some benchmark with a speedup of at least 3.6 from 1 to 4 processes. Otherwise I’ll need to look for other FEM packages.
FEniCS uses PETSc under the hood for solving linear systems. As your problems are still relatively small (fewer than 10^6 DOFs) and solve in about 1 second, I don’t really see a problem with respect to scaling.
As you can see in the turbomachinery paper that nate referenced above, FEniCS has been used for HPC problems with over 200 million DOFs (see Figure 9, page 15).
I like that you don’t even look at the other results showing even better scaling, but it’s your choice. Good luck finding a software package that suits your requirements. I would suggest deal.II, Firedrake, DUNE or FreeFEM++ off the top of my head. There is a big jungle out there, and I have probably missed 50 other large packages.
If I remember correctly, solve, without specifying a solver string, will use the default linear solver. In FEniCS, it seems to be lu, which is a serial direct solver.
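If you want to check what your build actually provides (and what “default” maps to), DOLFIN has small helper functions for that; a minimal sketch:

```cpp
// Print the linear solvers, Krylov methods and preconditioners
// available in this DOLFIN/PETSc build.
list_linear_solver_methods();
list_krylov_solver_methods();
list_krylov_solver_preconditioners();
```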