I am running FEniCS in a Docker container on Arch Linux. In the past I used the conda version of FEniCS, and I have vague memories that it would automatically use all the cores of my laptop when solving a problem with a large number of elements. The current Docker version, on the other hand, uses only one core. Has anyone else observed this, and how can I make the Docker version use all the cores? I tried changing the number of CPUs allotted in the docker run command, but that did not change anything.
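For context, the stock FEniCS Docker images parallelize with MPI processes rather than threads, so the usual way to use every core is to launch the driver through mpirun. Below is a minimal sketch to check what FEniCS actually sees; the script name check_mpi.py and the process count are placeholders, and MPI.comm_world assumes a reasonably recent DOLFIN release:

```python
# Sketch: check how many MPI processes DOLFIN sees inside the container.
# Launch with, e.g.:  mpirun -n 4 python3 check_mpi.py
# (check_mpi.py and the process count are placeholders;
#  MPI.comm_world assumes DOLFIN 2018.1 or newer.)
from dolfin import MPI

comm = MPI.comm_world
print(f"process {MPI.rank(comm)} of {MPI.size(comm)}")
```

If this prints four distinct ranks, MPI is working and any remaining single-core behavior is in the driver itself, not the container.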
Thank you very much. I tried mpirun. My driver reads the mesh from an XDMF file that stores its data in an HDF5 file, and I got a bunch of errors:
HDF5-DIAG: Error detected in HDF5 (1.10.0-patch1) MPI-process 0:
#000: ../../../src/H5F.c line 491 in H5Fcreate(): unable to create file
major: File accessibilty
minor: Unable to open file
#001: ../../../src/H5Fint.c line 1168 in H5F_open(): unable to lock the file or initialize file structure
major: File accessibilty
minor: Unable to open file
#002: ../../../src/H5FD.c line 1821 in H5FD_lock(): driver lock request failed
major: Virtual File Layer
minor: Cannot update object
#003: ../../../src/H5FDsec2.c line 939 in H5FD_sec2_lock(): unable to flock file, errno = 11, error message = 'Resource temporarily unavailable'
major: File accessibilty
minor: Bad file ID accessed
(the same HDF5-DIAG block is printed two more times)
The driver runs in spite of these messages and uses four processes!
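For reference, the flock failures come from H5Fcreate, i.e. from ranks trying to lock the same HDF5 file on a filesystem where locking is unavailable. A minimal sketch of an MPI-aware mesh read in legacy DOLFIN follows; the file name mesh.xdmf is a placeholder, and XDMFFile defaults to MPI.comm_world anyway, so passing the communicator just makes the intent explicit:

```python
# Sketch: collective mesh read over MPI; "mesh.xdmf" is a placeholder.
from dolfin import Mesh, XDMFFile, MPI

mesh = Mesh()
with XDMFFile(MPI.comm_world, "mesh.xdmf") as infile:
    infile.read(mesh)  # each rank receives its partition of the mesh
```

If the warnings persist on a filesystem without flock support, HDF5 1.10 builds can usually be told to skip locking by exporting HDF5_USE_FILE_LOCKING=FALSE before the run, assuming your HDF5 version honors that variable.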
Thank you for your reply. I started a Bash shell in the Docker container and ran echo $OMP_NUM_THREADS. I got an empty line, which means this environment variable is not set. I tried setting it to 4 and then ran my code, but it was still using only one core of my laptop. Using mpirun -n 4, however, indeed gave me four processes even with OMP_NUM_THREADS=1.
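That matches how legacy DOLFIN is built: its parallelism is process-based (MPI), and OMP_NUM_THREADS mostly affects threaded BLAS, so setting it alone changes little. A small Poisson sketch to confirm the work really is distributed across ranks; the mesh size and script name are made up for illustration:

```python
# Sketch: confirm work is distributed across MPI ranks, not OpenMP threads.
# Run with:  mpirun -n 4 python3 poisson_mpi.py   (name and size are placeholders)
from dolfin import (UnitSquareMesh, FunctionSpace, TrialFunction,
                    TestFunction, Function, DirichletBC, Constant,
                    dot, grad, dx, solve, MPI)

mesh = UnitSquareMesh(256, 256)   # partitioned across ranks automatically
V = FunctionSpace(mesh, "P", 1)
u, v = TrialFunction(V), TestFunction(V)
bc = DirichletBC(V, Constant(0.0), "on_boundary")
uh = Function(V)
solve(dot(grad(u), grad(v))*dx == Constant(1.0)*v*dx, uh, bc)

# Each rank should report owning only a fraction of the cells:
print(f"rank {MPI.rank(MPI.comm_world)} owns {mesh.num_cells()} cells")
```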