It seems that this limitation was causing the build process to crash. Setting Docker’s memory allocation to 4 GB + 4 GB of swap solved that problem, and I was able to install gmsh successfully following Jonathan Lambrechts’s instructions (Can't start gmsh after installing via pip (#1023) · Issues · gmsh / gmsh · GitLab).
I still have one remaining problem, though. While importing gmsh works perfectly fine in a terminal session after adding ‘/usr/local/lib’ to the PYTHONPATH variable, I can’t get JupyterLab’s environment to pick it up. Inside a notebook,
import os
os.environ['PYTHONPATH']
yields /usr/local/dolfinx-real/lib/python3.8/dist-packages.
import sys
sys.path.append('/usr/local/lib')
does not help either. I tried exporting PYTHONPATH in the ~/.bashrc file, but it looks like that would need to be done before launching JupyterLab. Do you have any idea how to do this?
You can’t update PYTHONPATH inside a running notebook, as this has to be done prior to running the container. An example of how to do this is shown below (spawning the complex build of PETSc/DOLFINx with gmsh):
docker run -v $(pwd):/root/shared -w "/root/shared" --rm --env LD_LIBRARY_PATH=/usr/local/dolfinx-complex/lib --env PATH=/usr/local/dolfinx-complex/bin:/usr/local/gmsh-4.6.0-Linux64-sdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin --env PKG_CONFIG_PATH=/usr/local/dolfinx-complex/lib/pkgconfig --env PETSC_ARCH=linux-gnu-complex-32 --env PYTHONPATH=/usr/local/dolfinx-complex/lib/python3.8/dist-packages:/usr/local/gmsh-4.6.0-Linux64-sdk/lib -p 8888:8888 dolfinx/lab
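Once a notebook is running in that container, a quick way to confirm that the complex build was picked up is to check PETSc’s scalar type (a minimal sketch; it only assumes petsc4py is importable):
# The complex build should report a complex dtype, e.g. complex128
from petsc4py import PETSc
print(PETSc.ScalarType)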
Awesome, thanks!
To recap, for anyone looking to run the docker image dolfinx/lab with gmsh:
The container needs to be launched using the command:
docker run -ti --name=FEniCSX_real --env PYTHONPATH=/usr/local/dolfinx-real/lib/python3.8/dist-packages:/usr/local/lib -p 8888:8888 dolfinx/lab:latest
Then gmsh can be built from source by running the next three commands. Note that to complete this process, a sufficient amount of memory needs to be allocated to Docker; 4 GB + 4 GB of swap were enough in my case.
apt-get update && apt-get install -y git cmake g++ gfortran python3 python3-numpy python3-scipy python3-pip libpetsc-complex-dev libslepc-complex3.12-dev libopenblas-dev libfltk1.3-dev libfreetype6-dev libgl1-mesa-dev libxi-dev libxmu-dev mesa-common-dev libhdf5-dev libcgns-dev libxft-dev libxinerama-dev libxcursor-dev libxfixes-dev libocct-foundation-dev libocct-data-exchange-dev libocct-ocaf-dev libopenmpi-dev libboost-dev && apt-get clean
git clone https://github.com/MmgTools/mmg.git && cd mmg && mkdir build && cd build && cmake -DBUILD_SHARED_LIBS=1 .. && make -j8 && make install && cd ../.. && rm -rf mmg
git clone https://gitlab.onelab.info/gmsh/gmsh.git && cd gmsh && mkdir build && cd build && cmake -DCMAKE_BUILD_TYPE=Release -DENABLE_BUILD_SHARED=1 -DENABLE_BUILD_DYNAMIC=1 .. && make -j8 shared gmsh && make install/fast && cd ../.. && rm -rf gmsh
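Once the build finishes, a quick sanity check inside a notebook (a minimal sketch; it assumes the container was started with /usr/local/lib on PYTHONPATH as in the command above) is:
import gmsh
gmsh.initialize()               # fails here if the module or its shared libraries are not found
gmsh.model.add("sanity_check")  # create an empty model
gmsh.finalize()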
Thank you for your help!
Hi. Does this procedure also work for the docker image dolfinx/dolfinx? Did you also manage to install pyvista in the arm64 docker image on an M1 Mac?
Yes, the procedure to install gmsh will work for both dolfinx/dolfinx and dolfinx/lab on arm64 (e.g. M1 Mac).
No one has posted instructions for PyVista on arm64 yet; the weak link is that there is no pip-installable vtk package.
https://vtkpythonpackage.readthedocs.io/en/latest/Automated_wheels_building_with_scripts.html#linux
Good day. I want to share some tests of the single-core performance of FEniCSx on an M1 Mac (arm64):
I ran the demo of the heat equation (without visualization in pyvista)
https://jorgensd.github.io/dolfinx-tutorial/chapter2/diffusion_code.html#
with the parameters:
t = 0 # Start time
T = 2.0 # Final time
num_steps = 1000
dt = T / num_steps # time step size
nx, ny = 500, 500 # Mesh sizes
The computation time of this experiment on an M1 Mac with 16 GB RAM was about 17 minutes. On the other hand, on a PC with an Intel Core i9-10900K (3.7 GHz, 5.3 GHz Turbo Boost, 20 MB cache) and 32 GB RAM, the same simulation took about 23 minutes.
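For reference, a simple way to measure the wall time (not necessarily how the numbers above were obtained) is to wrap the tutorial’s time-stepping loop:
import time
start = time.perf_counter()
# ... run the tutorial's time-stepping loop here ...
elapsed = time.perf_counter() - start
print(f"Wall time: {elapsed / 60:.1f} min")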
Does anyone have tests of the single-core performance of FEniCSx on an AMD Ryzen 5000 processor?
There are several things you can do to speed up your code.
The first thing is to change the solver from a direct to an iterative one.
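For example, something along these lines (a sketch, not the tutorial’s exact code; A stands for the assembled system matrix from your problem, and attribute names can vary slightly between dolfinx versions):
from mpi4py import MPI
from petsc4py import PETSc
solver = PETSc.KSP().create(MPI.COMM_WORLD)
solver.setOperators(A)                        # A: assembled system matrix
solver.setType(PETSc.KSP.Type.CG)             # iterative Krylov solver (for symmetric positive definite systems)
solver.getPC().setType(PETSc.PC.Type.HYPRE)   # algebraic multigrid preconditioner, if PETSc was built with hypre
solver.setTolerances(rtol=1e-8)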
You can also add additional form compiler parameters to optimize the generated assembly code, which can really increase the performance of your code.
I’ve added a section on this in my tutorial: JIT Parameters — FEniCS-X tutorial
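As a rough illustration of the kind of options described there (a sketch; the keyword has been called jit_parameters in older dolfinx releases and jit_options in newer ones, so adapt to your version):
from dolfinx import fem
jit_options = {"cffi_extra_compile_args": ["-Ofast", "-march=native"],
               "cffi_libraries": ["m"]}
a_compiled = fem.form(a, jit_options=jit_options)  # a: the UFL bilinear form of your problem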
Wow, that’s quite impressive.
I’d like to mention that we’re building both the x86-64 and ARM64 images with the absolute baseline optimisations at the moment (i.e. -O2). Particularly for ARM64 I am still trying to understand the march/mtune flags. The meaning of march/mtune on ARM64 is not the same as for x86 architectures (https://community.arm.com/developer/tools-software/tools/b/tools-software-ides-blog/posts/compiler-flags-across-architectures-march-mtune-and-mcpu).
You may get considerably better performance building your own images from the Dockerfile and passing the native march/mtune/mcpu flags for your architecture, see:
The ARM64 images now include gmsh.
I tried to install FEniCSx natively on an Apple Silicon Mac, but failed at the step of installing petsc4py.
Please add the recipe you have been using, and the relevant output/error trace.
Thank you !
I am currently using macOS Sonoma 14.1 on an Apple M2. Having experience with legacy FEniCS, I am transitioning to FEniCSx. It is necessary for me to customize the source code of dolfinx.
- At first, I followed the GitHub Actions workflow. In the following command, arch -x86_64 did not work on arm:
PETSC_DIR=/Users/pengfei/petsc PETSC_ARCH=arch-darwin-c-opt arch -x86_64 python -m pip install --no-cache-dir -v .
- Also following the GitHub Actions workflow, I tried to skip the step of installing petsc4py, but I found it difficult to install slepc and parmetis with Homebrew.
- I tried to use Spack. Here is my spack.yaml:
spack:
  specs:
  - fenics-dolfinx+adios2
  - py-fenics-dolfinx cflags=-O3 fflags=-O3
  view: true
  concretizer:
    unify: true
  compilers:
  - compiler:
      spec: apple-clang@=15.0.0
      paths:
        cc: /usr/bin/clang
        cxx: /usr/bin/clang++
        f77: /opt/homebrew/bin/gfortran-13
        fc: /opt/homebrew/bin/gfortran-13
      flags: {}
      operating_system: sonoma
      target: aarch64
      modules: []
      environment: {}
      extra_rpaths: []
There might be some problem with the Fortran compiler. Here is the error message:
==> Installing openblas-0.3.26-sx3q4pbkqpzl44pnook2lspq44fofbwn [50/89]
==> No binary for openblas-0.3.26-sx3q4pbkqpzl44pnook2lspq44fofbwn found: installing from source
==> Using cached archive: /Users/pengfei/Documents/GitHub/spack/var/spack/cache/_source-cache/archive/4e/4e6e4f5cb14c209262e33e6816d70221a2fe49eb69eaf0a06f065598ac602c68.tar.gz
==> No patches needed for openblas
==> openblas: Executing phase: 'edit'
==> openblas: Executing phase: 'build'
==> Error: ProcessError: Command exited with status 2:
'make' '-j8' '-s' 'CC=/Users/pengfei/Documents/GitHub/spack/lib/spack/env/clang/clang' 'FC=/Users/pengfei/Documents/GitHub/spack/lib/spack/env/clang/gfortran' 'MAKE_NB_JOBS=0' 'ARCH=arm64' 'DYNAMIC_ARCH=1' 'DYNAMIC_OLDER=1' 'TARGET=GENERIC' 'USE_LOCKING=1' 'USE_OPENMP=0' 'USE_THREAD=0' 'RANLIB=ranlib' 'all'
3 errors found in build log:
13701 clang: warning: -Wl,-ld_classic: 'linker' input unused [-Wunused-command-line-argument]
13702 clang: warning: -Wl,-ld_classic: 'linker' input unused [-Wunused-command-line-argument]
13703 clang: warning: -Wl,-ld_classic: 'linker' input unused [-Wunused-command-line-argument]
13704 clang: warning: -Wl,-ld_classic: 'linker' input unused [-Wunused-command-line-argument]
13705 clang: warning: -Wl,-ld_classic: 'linker' input unused [-Wunused-command-line-argument]
13706 ld: library not found for -ld_classic
>> 13707 collect2: error: ld returned 1 exit status
>> 13708 make[1]: *** [Makefile:155: libopenblas-r0.3.26.dylib] Error 1
>> 13709 make: *** [Makefile:145: shared] Error 2
See build log for details:
/var/folders/s7/j2wq1y9x71nbxvc7v6y5jp2c0000gn/T/pengfei/spack-stage/spack-stage-openblas-0.3.26-sx3q4pbkqpzl44pnook2lspq44fofbwn/spack-build-out.txt
- I also tried to install it with conda, but failed to use it.
>>> from dolfinx import *
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 4 Illegal instruction: Likely due to memory corruption
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and https://petsc.org/release/faq/
[0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
[0]PETSC ERROR: to get more information on the crash.
Abort(59) on node 0 (rank 0 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0
- I returned to step 2. I found slepc and parmetis had been installed by conda.
I haven’t succeeded yet; I am still trying.