FEniCS on Apple Silicon M1

It seems that this limitation was causing the build process to crash. Setting Docker’s memory allocation to 4 GB plus 4 GB of swap solved that problem, and I was able to install gmsh successfully following Jonathan Lambrechts’s instructions (Can't start gmsh after installing via pip (#1023) · Issues · gmsh / gmsh · GitLab).

I still have one remaining problem, though. While importing gmsh works perfectly fine in a terminal session after adding ‘/usr/local/lib’ to the PYTHONPATH variable, I can’t get JupyterLab’s environment to pick up the change. Inside a notebook,

import os
os.environ['PYTHONPATH']

yields /usr/local/dolfinx-real/lib/python3.8/dist-packages.

import sys
sys.path.append('/usr/local/lib')

does not help either. I tried exporting PYTHONPATH in the ~/.bashrc file, but it looks like that would need to happen before JupyterLab is launched. Do you have any idea how to do this?

You can’t update PYTHONPATH inside a running notebook; it has to be set before the container is started. An example of how to do this is shown below (spawning the complex build of PETSc/DOLFINx with gmsh):

docker run -v $(pwd):/root/shared -w "/root/shared" --rm --env LD_LIBRARY_PATH=/usr/local/dolfinx-complex/lib --env PATH=/usr/local/dolfinx-complex/bin:/usr/local/gmsh-4.6.0-Linux64-sdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin --env PKG_CONFIG_PATH=/usr/local/dolfinx-complex/lib/pkgconfig --env PETSC_ARCH=linux-gnu-complex-32 --env PYTHONPATH=/usr/local/dolfinx-complex/lib/python3.8/dist-packages:/usr/local/gmsh-4.6.0-Linux64-sdk/lib -p 8888:8888 dolfinx/lab
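
Once the container is launched with PYTHONPATH set this way, a quick sanity check inside a notebook cell might look like the following (a minimal sketch; it only assumes gmsh was installed under one of the directories listed in PYTHONPATH):

import os, sys
# The PYTHONPATH passed with --env is visible to the notebook kernel
print(os.environ.get('PYTHONPATH'))
# and its entries are already part of the interpreter's search path
print(sys.path)
# so the import should work without touching sys.path by hand
import gmsh
print(gmsh.__file__)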


Awesome, thanks!
To recap, for anyone looking to run the Docker image dolfinx/lab with gmsh:
The container needs to be launched using the command

docker run -ti --name=FEniCSX_real --env PYTHONPATH=/usr/local/dolfinx-real/lib/python3.8/dist-packages:/usr/local/lib -p 8888:8888 dolfinx/lab:latest

Then gmsh can be built from source by running the following three commands. Note that to complete this process, Docker needs to be allocated sufficient memory; 4 GB plus 4 GB of swap was enough in my case.

apt-get update && apt-get install -y git cmake g++ gfortran python3 python3-numpy python3-scipy python3-pip libpetsc-complex-dev libslepc-complex3.12-dev libopenblas-dev libfltk1.3-dev libfreetype6-dev libgl1-mesa-dev libxi-dev libxmu-dev mesa-common-dev libhdf5-dev libcgns-dev libxft-dev libxinerama-dev libxcursor-dev libxfixes-dev libocct-foundation-dev libocct-data-exchange-dev libocct-ocaf-dev libopenmpi-dev libboost-dev && apt-get clean
git clone https://github.com/MmgTools/mmg.git && cd mmg && mkdir build && cd build && cmake -DBUILD_SHARED_LIBS=1 .. && make -j8 && make install && cd ../.. && rm -rf mmg
git clone https://gitlab.onelab.info/gmsh/gmsh.git && cd gmsh && mkdir build && cd build && cmake -DCMAKE_BUILD_TYPE=Release -DENABLE_BUILD_SHARED=1 -DENABLE_BUILD_DYNAMIC=1 .. && make -j8 shared gmsh && make install/fast && cd ../.. && rm -rf gmsh
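
Once the build has finished, a short smoke test of the gmsh Python API might look like this (a minimal sketch; the geometry is arbitrary and only meant to confirm that the module loads and can mesh something):

import gmsh
gmsh.initialize()
gmsh.model.add("smoke_test")                 # empty model
gmsh.model.occ.addBox(0, 0, 0, 1, 1, 1)      # unit cube via the OpenCASCADE kernel
gmsh.model.occ.synchronize()
gmsh.model.mesh.generate(3)                  # generate a 3D mesh
print(gmsh.model.getEntities(dim=3))         # should report one volume
gmsh.finalize()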

Thank you for your help!

Hi. Does this procedure also work for the Docker image dolfinx/dolfinx? Did you also manage to install pyvista in the arm64 Docker image on a Mac M1?

Yes, the procedure to install gmsh will work for both dolfinx/dolfinx and dolfinx/lab on arm64 (e.g. M1 Mac).

No one has posted instructions for PyVista on arm64 yet; the weak link is that there is no pip-installable vtk package.

https://vtkpythonpackage.readthedocs.io/en/latest/Automated_wheels_building_with_scripts.html#linux

Good day. I want to share some tests of the single-core performance of FEniCSx on an M1 Mac (arm64):

I ran the demo of the heat equation (without visualization in pyvista)

https://jorgensd.github.io/dolfinx-tutorial/chapter2/diffusion_code.html#

with the parameters:

t = 0 # Start time
T = 2.0 # Final time
num_steps = 1000
dt = T / num_steps # time step size

nx, ny = 500, 500 # Mesh sizes

The computation time for this experiment on an M1 Mac with 16 GB of RAM was about 17 minutes. By comparison, on a PC with an Intel Core i9-10900K (3.7 GHz, 5.3 GHz Turbo Boost, 20 MB cache) and 32 GB of RAM, the same simulation takes 23 minutes.

Does anyone have tests of the single-core performance of FEniCSx on an AMD Ryzen 5000 processor?

There are several things you can do to speed up your code.
The first is to change the solver from a direct to an iterative solver.
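
For the heat-equation demo above, switching to an iterative solver could look roughly like the sketch below. It assumes the assembled matrix A from the tutorial and a PETSc build that includes hypre; Jacobi or GAMG preconditioners are alternatives if hypre is not available.

from mpi4py import MPI
from petsc4py import PETSc
# 'A' is the assembled system matrix from the tutorial
solver = PETSc.KSP().create(MPI.COMM_WORLD)
solver.setOperators(A)
solver.setType(PETSc.KSP.Type.CG)            # conjugate gradient; the heat operator is SPD
solver.getPC().setType(PETSc.PC.Type.HYPRE)  # BoomerAMG preconditioner (needs PETSc with hypre)
solver.setTolerances(rtol=1e-8)

The rest of the time loop stays unchanged; only the solver setup differs.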

You can also add additional form compiler parameters to optimize the generated assembly code, which can significantly improve the performance of your code.
I’ve added a section on this in my tutorial: JIT Parameters — FEniCS-X tutorial
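
As a rough illustration of the second point (a sketch only; the exact keyword has changed between DOLFINx releases, e.g. jit_parameters vs. jit_options, so check the tutorial for the version you are running; 'a' and 'L' are the bilinear and linear forms from the demo):

from dolfinx import fem
# Extra flags for the C code generated by FFCx for the variational forms.
# On ARM64, -mcpu=native may be more appropriate than -march=native.
jit_options = {"cffi_extra_compile_args": ["-Ofast", "-march=native"],
               "cffi_libraries": ["m"]}
bilinear_form = fem.form(a, jit_options=jit_options)
linear_form = fem.form(L, jit_options=jit_options)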

Wow, that’s quite impressive.

I’d like to mention that we’re building both the x86-64 and ARM64 images with the absolute baseline optimisations at the moment (i.e. -O2). Particularly for ARM64, I am still trying to understand the march/mtune flags; their meaning on ARM64 is not the same as on x86 architectures (https://community.arm.com/developer/tools-software/tools/b/tools-software-ides-blog/posts/compiler-flags-across-architectures-march-mtune-and-mcpu).

You may get considerably better performance by building your own images from the Dockerfile and passing the native march/mtune/mcpu flags for your architecture, see:


The ARM64 images now include gmsh.
