Installing FEniCS on a computing cluster (Red Hat)

Dear all,

On the FEniCS website it is recommended to build FEniCS from source when using it for HPC applications.
Is there a particular reason for this? Does building FEniCS from source have advantages over an Anaconda or Docker installation when installing FEniCS on an HPC platform?

The reason I am asking is that on the computing platform I am using, I attempted to build FEniCS from source, and installing certain prerequisite packages requires running "yum install ...". I do not have root access, so the system blocks me at this step. If I could use Anaconda or Docker without any consequences for computational efficiency, that would be most convenient.

With kind regards,

Dear Tkon,

I am trying to install FEniCS on a cluster from source. I followed the instructions described on the website, but it does not work. Have you solved your problem and successfully built it on your cluster?

In fact, I have used FEniCS via a Docker container on my personal computer, and I am lost with the installation from source.

Thanks in advance

Installing FEniCS from source without admin privileges on a cluster would be quite difficult.

You cannot run a Docker image on a shared cluster, because Docker images give the user root access inside the image, and this is not acceptable in a shared environment like an HPC cluster. Singularity was built exactly for this purpose. Once Singularity is installed on the cluster, you can use the command

singularity pull --name fenics.sif docker://quay.io/fenicsproject/stable:latest

to pull from Docker and build a Singularity image. You can then run your code, just as with Docker images, using the command

singularity exec fenics.sif python3 my_code.py
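
If you later need parallel runs, the usual pattern (assuming the MPI stack on the host is compatible with the one inside the image) is to launch the container through the host's mpirun, for example

mpirun -np 4 singularity exec fenics.sif python3 my_code.py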

To build a FEniCSx image instead, you would pull the appropriate image from Docker:

singularity pull --name fenicsx.sif docker://dolfinx/dolfinx

See this link for more information.


For what it’s worth, the Debian and Ubuntu packaging for FEniCS is designed to be ready to run as-installed on cluster installations. Not much help if you don’t have control of the cluster operating system, but cloud images are available for use with cloud installations (e.g. systems managed via OpenStack).
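
For reference, on a Debian or Ubuntu machine you administer yourself, that installation is a single command (the metapackage name here is my assumption of the standard one):

sudo apt install fenics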

Dear teja781,

Thanks for your kind answer and instructions. It is now clear to me that Singularity could solve the problem of running Docker images on an HPC cluster. However, the admin tells me that Docker is not recommended right now for installation on our new cluster, and he has instead installed FEniCS with conda, which works only under my personal account.

I wonder whether this kind of installation can work for parallel computing on an HPC cluster. In fact I have just submitted a test with 4 cores, i.e.
mpirun -np 4 python3 ${JOBNAME}.py
which is currently running.

But when I open the results folder, it does not look very good: the computation does not seem much faster than the case with only 1 core.

Thanks again in advance.

Dear dparsons,

Thanks for your answer. FEniCS has been installed on our cluster with conda, under my personal account. As a beginner using FEniCS on a cluster, what do you think about this kind of installation for parallel computing?

Working with conda would need the Python environment (virtualenv) to be set up carefully, though that can probably be done easily enough. The important thing is that the MPI implementation it is built against supports running across multiple hosts.
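
For illustration only (the channel and package names here are my assumption, not necessarily what your admin used), a conda-forge environment of this kind would typically be created with something like

conda create -n fenicsproject -c conda-forge fenics mpich

and then activated in the job script before calling mpirun.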

Try setting OMP_NUM_THREADS=1 and see if that helps improve MPI performance.
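
For example, in the job script (assuming a bash-like shell):

export OMP_NUM_THREADS=1
mpirun -np 4 python3 ${JOBNAME}.py

You can also check that the ranks are really distinct (assuming mpi4py is available in the environment) with

mpirun -np 4 python3 -c "from mpi4py import MPI; c = MPI.COMM_WORLD; print(c.rank, c.size)"

If every process reports rank 0 of size 1, the mpirun on the cluster and the MPI inside the conda environment do not match.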

Thanks! I will try it and report back once it works correctly.

You don’t have to install Docker on the cluster, only Singularity. Singularity can work directly with Docker images.

I found this to be the case too. I gave up on Anaconda after I saw on the FEniCS website that it is not officially supported. Hopefully you’ll have better luck with help from the other comments.

Finally, our computing centre told me that Singularity is installed on the cluster, so the problem is solved! Thanks again for your suggestion!