SciPy in docker container not parallel

Hello,

Usage
I use FEniCS 2019 in the official docker image quay.io/fenicsproject/stable:current to assemble a sparse eigenvalue problem, export the matrices to scipy.sparse.csr_matrix, and solve it with scipy.sparse.linalg.eigs.

PC configuration

  • Debian 11
  • architecture: x86_64
  • 8 hyperthreaded cores (16 threads in total)
  • 24GB RAM
  • Yes, you guessed it. The PC is very old.

Problem
The eigenvalue solver (ARPACK) called by SciPy’s eigs runs on only a single thread. This is a serious problem for me because, given the PC’s small RAM and the size of my eigenproblems, I cannot run more than 2 independent jobs. Therefore most of my computational capacity sits idle. On other machines with Ubuntu, eigs uses all available threads.
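
A quick way to check which BLAS SciPy is linked against, and how many threads it exposes, is the threadpoolctl package (a diagnostic sketch; threadpoolctl is not preinstalled in the image and needs a pip install first):

# Sketch: inspect the BLAS/LAPACK libraries loaded by NumPy/SciPy.
# Assumes `pip install threadpoolctl` has been run inside the container.
import numpy as np  # loading numpy pulls in its BLAS
from threadpoolctl import threadpool_info

for pool in threadpool_info():
    # Each entry reports the library (e.g. 'openblas'), its threading
    # layer ('pthreads', 'openmp' or 'disabled') and the max thread count.
    print(pool.get("internal_api"), pool.get("threading_layer"), pool.get("num_threads"))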

Expected behaviour
ARPACK in the docker container should use all available threads (or at least more than one).

Question
Is this normal, or is something wrong with my PC (outdated hardware, perhaps)? Is there a recommended way to customize the container (perhaps something like [this](https://fenicsproject.discourse.group/t/how-to-add-a-python-module-to-the-fenics-docker-container/1665/3?u=lukas-babor)) to enable parallel ARPACK?

Thanks for any suggestions 🙂

Is there a particular reason you’re married to using the scipy interface with concurrency rather than the parallelism offered by SLEPc’s EPS?

Thanks for the suggestion. I forgot to mention that my eigenvalue problem is complex. I was too scared to follow your advice, since I cannot even pronounce sesquilinear, let alone understand what it means. In the end, my colleague who wrote the code just followed the recommendation from this post.

Below is a (not so minimal) example. Is it straightforward to solve it with SLEPc? The usual problems that we solve are more complicated, so I do not want to make any drastic changes to our well-tested code.

from   dolfin              import *   # star import: overwrites local variables with conflicting names!
import numpy                   as np
import scipy.sparse            as sp
import matplotlib.pyplot       as plt
from   scipy.sparse.linalg import eigs

Re = Constant(896.425)
eta = 1.135
d = eta-1.0
Nr = 20
velocity_order = 2
kz = Constant(1/2/d)
k2pi = kz*2*np.pi
num_eig = 10

class InnerCylinder(SubDomain):
    def inside(self,x,on_boundary):
        return near(x[0],1.0)
    
class OuterCylinder(SubDomain):
    def inside(self,x,on_boundary):
        return near(x[0],eta)

mesh = IntervalMesh(Nr,1.0,eta)
x = SpatialCoordinate(mesh)
r = x[0]

# Function space
V0 = FunctionSpace(mesh, "CG", velocity_order  )
Q0 = FunctionSpace(mesh, "CG", velocity_order-1)
Vp = VectorElement("CG", mesh.ufl_cell(), velocity_order  , dim=3)
Qp = FiniteElement("CG", mesh.ufl_cell(), velocity_order-1)        
Wp = FunctionSpace(mesh, MixedElement(Vp,Qp))
up, pp = TrialFunctions(Wp)
vp, qp = TestFunctions(Wp)
ur,uphi,uz = split(up) # here the order of velocity components is more or less arbitrary
vr,vphi,vz = split(vp) # but for 2D and 3D meshes it must match the order of coordinates in FEniCS!

# Analytical basic state
c1,c2 = -1/(eta**2-1), eta**2/(eta**2-1)
Uphi_exact = Expression('c1*x[0]+c2/x[0]',c1=c1,c2=c2,degree=2)
P_exact = Expression('pow(A,2)*pow(x[0],2)/2+2*A*B*std::log(x[0])-pow(B,2)/2/pow(x[0],2)-(pow(A,2)-pow(B,2))/2',
                    A=c1,B=c2,degree=1)
Uphi,P = Function(V0), Function(Q0)
Uphi.interpolate(Uphi_exact)
P.interpolate(P_exact)

# Boundary conditions
bcs = [DirichletBC(Wp.sub(0), Constant((0.0,0.0,0.0)), OuterCylinder())
      ,DirichletBC(Wp.sub(0), Constant((0.0,0.0,0.0)), InnerCylinder())
      ]

# Variational forms
Ji = -vz*k2pi*pp *r*dx # z-momentum: pressure gradient
Ji+= -uz*k2pi*qp *r*dx # continuity, z-direction
J  = ( 2*Uphi*uphi*vr/r +pp*(vr.dx(0)+vr/r) ) *r*dx # r: convection + pressure
J += -vphi*ur *( Uphi.dx(0) +Uphi/r ) *r*dx     # phi: convection
J += 1/Re *( -vr.dx(0)*ur.dx(0) -k2pi**2*ur*vr -vr*ur/r**2 ) *r*dx # r
J += 1/Re *( -vphi.dx(0)*uphi.dx(0) -k2pi**2*uphi*vphi -vphi*uphi/r**2 ) *r*dx #phi
J += 1/Re *( -vz.dx(0)*uz.dx(0) -k2pi**2*uz*vz                   ) *r*dx # z: Laplacian
J += -qp *( ur.dx(0) +ur/r ) *r*dx # continuity
m = dot(up,vp) *r*dx

# Create matrices in FEniCS
A, Ai, M = PETScMatrix(), PETScMatrix(), PETScMatrix() # initialize empty matrices in FEniCS
assemble(J , tensor=A ) # populate them with coefficients from the weak forms
assemble(Ji, tensor=Ai)
assemble(m , tensor=M ) 

# Impose Dirichlet BCs
bcinds = []
for bc in bcs:
    bc.apply(A )
    bc.apply(Ai)
    bc.apply(M )
    bcdict = bc.get_boundary_values()
    bcinds.extend(bcdict.keys())    

# Export matrices from FEniCS into sparse matrices for SciPy
Ar   = sp.csr_matrix( A.mat().getValuesCSR()[::-1])
Ai   = sp.csr_matrix(Ai.mat().getValuesCSR()[::-1])
M    = sp.csr_matrix( M.mat().getValuesCSR()[::-1])

# Create shift matrix -> take care of the Dirichlet boundary conditions,
# otherwise one gets fictitious eigenvalues equal to 1
shift =  1.2345e6*np.ones(len(bcinds)) # large arbitrary multiplier
S = sp.csr_matrix((shift, (bcinds, bcinds)), shape=Ar.shape)       

# Complex matrix
A = Ar + 1j*Ai + S

# Eigenvalue solver
v, V = eigs(A, num_eig, M, sigma=1.0)

# order eigenpairs from largest to smallest real part
idxs = np.argsort(-np.real(v))
v, V = v[idxs], V[:, idxs]  # reorder eigenvalues and the matching eigenvector columns
print(v[0])
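
For orientation, here is my rough (untested) understanding of what the equivalent SLEPc solve would look like. It assumes a PETSc/SLEPc build with complex scalars, which the stock image does not provide, and Ap, Mp stand for the assembled petsc4py matrices (e.g. Ap = A.mat() from a dolfin PETScMatrix):

# Untested sketch of the same shift-and-invert solve with SLEPc's EPS
# via slepc4py. Assumes a complex-scalar PETSc/SLEPc build; Ap, Mp and
# num_eig are placeholders taken from the example above.
from slepc4py import SLEPc

eps = SLEPc.EPS().create()
eps.setOperators(Ap, Mp)
eps.setProblemType(SLEPc.EPS.ProblemType.GNHEP)   # generalized non-Hermitian
eps.setDimensions(num_eig)
eps.setWhichEigenpairs(SLEPc.EPS.Which.TARGET_MAGNITUDE)
eps.setTarget(1.0)                                # plays the role of sigma=1.0
eps.getST().setType(SLEPc.ST.Type.SINVERT)        # shift-and-invert transform
eps.solve()

vr, vi = Ap.createVecs()
for i in range(min(eps.getConverged(), num_eig)):
    print(eps.getEigenpair(i, vr, vi))            # returns the i-th eigenvalue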

Apologies that this discussion went off-topic. Returning to my original issue: I have noticed that even dolfin.solve(F==0,w,bcs=bcs), where F is a nonlinear weak form, runs on only a single thread on my PC.

Outside of docker I was able to build NumPy and SciPy with parallel support, following a workflow like this: install OpenBLAS and Cython, then build NumPy and SciPy from source.

The question is the same as in the original post.

If I understand correctly, the docker image is based on Ubuntu, so apt is the package manager. In that case SciPy in docker (if it’s the one provided by Ubuntu) should already have BLAS threading support. Which BLAS is installed in your docker container?

$ apt list --installed | grep blas
libopenblas-base/bionic,now 0.2.20+ds-4 amd64 [installed,automatic]
libopenblas-dev/bionic,now 0.2.20+ds-4 amd64 [installed]

$ apt list --installed | grep lapack
liblapack-dev/bionic,now 3.7.1-4ubuntu1 amd64 [installed]
liblapack3/bionic,now 3.7.1-4ubuntu1 amd64 [installed,automatic]

There’s something strange in your BLAS list: the OpenBLAS installation is incomplete. If libopenblas-dev is installed, then it should trigger installation of libopenblas-pthread-dev, libopenblas-openmp-dev or libopenblas-serial-dev, which in turn pull in the packages providing the actual libraries (e.g. libopenblas0-pthread or libopenblas0-openmp). Maybe it was a bug in the OpenBLAS packaging for bionic that got fixed later.

To get thread parallelization you need the pthread or openmp variant of the package. If docker is not pulling it in automatically, install it manually by adding libopenblas-pthread-dev or libopenblas-openmp-dev to the installation instructions in the docker script.
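
Once a threaded variant is installed, the thread count can also be capped at runtime from Python with the threadpoolctl package (a sketch; threadpoolctl is an extra pip dependency):

# Sketch: cap BLAS threads for a specific region of code.
# Assumes `pip install threadpoolctl`.
import numpy as np
from threadpoolctl import threadpool_limits

a = np.random.rand(2000, 2000)
with threadpool_limits(limits=4, user_api="blas"):
    a @ a  # this matmul uses at most 4 OpenBLAS threads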


Dear dparsons,

thank you for the hint. Unfortunately these packages have only been available since Ubuntu 20.04, while the FEniCS image uses Ubuntu 18.04 (bionic). Thus I get the error E: Unable to locate package libopenblas-openmp-dev.

But I understand your point. This looks more like an Ubuntu issue than a FEniCS problem. The solution would be to customize the container and install all the packages needed for parallelized linear algebra. It would probably be even faster to write my own docker image from scratch based on Ubuntu 20.04. Unfortunately, I do not have time to try that now, but I will return to it in a few months. I am surprised that I seem to be the only affected user, though.

In the meantime I use a quick and dirty fix: I save the matrices computed inside the container into the shared folder, and then solve the eigenvalue problem outside of the container.
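
Concretely, a sketch of that workaround, assuming the image's default bind mount at /home/fenics/shared (adjust the path to your setup); A and M are the scipy matrices from the example above:

# --- inside the container: dump the assembled matrices to the shared folder ---
import scipy.sparse as sp
sp.save_npz("/home/fenics/shared/A.npz", A)  # A = Ar + 1j*Ai + S from above
sp.save_npz("/home/fenics/shared/M.npz", M)

# --- on the host, where scipy links a threaded BLAS: load and solve ---
import scipy.sparse as sp
from scipy.sparse.linalg import eigs
A = sp.load_npz("A.npz")
M = sp.load_npz("M.npz")
v, V = eigs(A, 10, M, sigma=1.0)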

@Lukas-Babor I made a Dockerfile for Ubuntu 20.04 a while back (hopefully it still works).
See: How to install dolfin on Ubuntu-20.04 from source - #5 by dokken

Most users would get FEniCS parallelization from the use of MPI, not from threading. In fact, MPI performance is generally better with OMP_NUM_THREADS=1.
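
For illustration, a minimal sketch (not from this thread) of how a legacy dolfin script becomes MPI-parallel with no code changes:

# Minimal illustration: legacy dolfin partitions the mesh across MPI ranks
# automatically; launch with e.g.:  mpirun -np 8 python3 demo_mpi.py
from dolfin import MPI, UnitSquareMesh

mesh = UnitSquareMesh(64, 64)
rank = MPI.rank(MPI.comm_world)
print("rank %d owns %d cells" % (rank, mesh.num_cells()))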

Dear @dokken

Thank you very much! For now I get an error (see below), but at least this is a good starting point. I expect I just need to pin an older version of UFL?

Step 26/33 : RUN git clone https://github.com/FEniCS/fiat.git &&     git clone https://bitbucket.com/fenics-project/ffc.git &&     git clone https://github.com/FEniCS/ufl.git && 	git clone https://bitbucket.com/fenics-project/dolfin.git && 	git clone https://bitbucket.com/fenics-project/dijitso.git
 ---> Running in 8055e2f2b915
Cloning into 'fiat'...
Cloning into 'ffc'...
warning: redirecting to https://bitbucket.org/fenics-project/ffc.git/
Cloning into 'ufl'...
Cloning into 'dolfin'...
warning: redirecting to https://bitbucket.org/fenics-project/dolfin.git/
Cloning into 'dijitso'...
warning: redirecting to https://bitbucket.org/fenics-project/dijitso.git/
Removing intermediate container 8055e2f2b915
 ---> d0aaf02c7659
Step 27/33 : RUN cd fiat && pip3 install --no-cache-dir . &&     cd ../ufl && pip3 install --no-cache-dir . && 	cd ../dijitso && pip3 install --no-cache-dir . &&     cd ../ffc && pip3 install --no-cache-dir . &&     cd ../ && pip3 install --no-cache-dir ipython
 ---> Running in 3cb43140b9a2
Processing /src/fiat
Requirement already satisfied: numpy in /usr/local/lib/python3.8/dist-packages (from fenics-fiat==2019.2.0.dev0) (1.21.5)
Collecting sympy
  Downloading sympy-1.9-py3-none-any.whl (6.2 MB)
Collecting mpmath>=0.19
  Downloading mpmath-1.2.1-py3-none-any.whl (532 kB)
Building wheels for collected packages: fenics-fiat
  Building wheel for fenics-fiat (setup.py): started
  Building wheel for fenics-fiat (setup.py): finished with status 'done'
  Created wheel for fenics-fiat: filename=fenics_fiat-2019.2.0.dev0-py3-none-any.whl size=125565 sha256=bd3035eaf34f7f99b2d041ecb7446a376973b8a63c4832a40a0e58517f5a2060
  Stored in directory: /tmp/pip-ephem-wheel-cache-tjcrnng5/wheels/0e/a4/58/b60a9f631a851d2e5e598f319bd944d6fdb90a08f52843fbb7
Successfully built fenics-fiat
Installing collected packages: mpmath, sympy, fenics-fiat
Successfully installed fenics-fiat-2019.2.0.dev0 mpmath-1.2.1 sympy-1.9
Processing /src/ufl
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
    Preparing wheel metadata: started
    Preparing wheel metadata: finished with status 'done'
Requirement already satisfied: numpy in /usr/local/lib/python3.8/dist-packages (from fenics-ufl==2021.2.0.dev0) (1.21.5)
Building wheels for collected packages: fenics-ufl
  Building wheel for fenics-ufl (PEP 517): started
  Building wheel for fenics-ufl (PEP 517): finished with status 'done'
  Created wheel for fenics-ufl: filename=fenics_ufl-2021.2.0.dev0-py3-none-any.whl size=258220 sha256=d210c9f2e488d43ca62ca6eaf3471fb8a71549e0e97a1303392d7a6ed68e2237
  Stored in directory: /tmp/pip-ephem-wheel-cache-rx30jmt1/wheels/64/e0/64/0f85376f52a232f1b2507dc4fb041bd35465f4c4c4838a77ee
Successfully built fenics-ufl
Installing collected packages: fenics-ufl
Successfully installed fenics-ufl-2021.2.0.dev0
Processing /src/dijitso
Requirement already satisfied: numpy in /usr/local/lib/python3.8/dist-packages (from fenics-dijitso==2019.2.0.dev0) (1.21.5)
Building wheels for collected packages: fenics-dijitso
  Building wheel for fenics-dijitso (setup.py): started
  Building wheel for fenics-dijitso (setup.py): finished with status 'done'
  Created wheel for fenics-dijitso: filename=fenics_dijitso-2019.2.0.dev0-py3-none-any.whl size=46773 sha256=981cc45cf0326f6c4302ec37811aa7cb88bcba24cc6886e414781e40b7b2a971
  Stored in directory: /tmp/pip-ephem-wheel-cache-0v_d9p2z/wheels/3b/07/be/4f674135a1eb5a5fd8495e76d4aa65337f282a6dbefa34e06d
Successfully built fenics-dijitso
Installing collected packages: fenics-dijitso
Successfully installed fenics-dijitso-2019.2.0.dev0
Processing /src/ffc
Requirement already satisfied: fenics-dijitso<2019.3,>=2019.2.0.dev0 in /usr/local/lib/python3.8/dist-packages (from fenics-ffc==2019.2.0.dev0) (2019.2.0.dev0)
Requirement already satisfied: fenics-fiat<2019.3,>=2019.2.0.dev0 in /usr/local/lib/python3.8/dist-packages (from fenics-ffc==2019.2.0.dev0) (2019.2.0.dev0)
Requirement already satisfied: fenics-ufl>=2021.1.0 in /usr/local/lib/python3.8/dist-packages (from fenics-ffc==2019.2.0.dev0) (2021.2.0.dev0)
Requirement already satisfied: numpy in /usr/local/lib/python3.8/dist-packages (from fenics-ffc==2019.2.0.dev0) (1.21.5)
Requirement already satisfied: sympy in /usr/local/lib/python3.8/dist-packages (from fenics-fiat<2019.3,>=2019.2.0.dev0->fenics-ffc==2019.2.0.dev0) (1.9)
Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.8/dist-packages (from sympy->fenics-fiat<2019.3,>=2019.2.0.dev0->fenics-ffc==2019.2.0.dev0) (1.2.1)
Building wheels for collected packages: fenics-ffc
  Building wheel for fenics-ffc (setup.py): started
  Building wheel for fenics-ffc (setup.py): finished with status 'done'
  Created wheel for fenics-ffc: filename=fenics_ffc-2019.2.0.dev0-py3-none-any.whl size=363381 sha256=70289384030187795c17488de42a086ad9bae2767db91975f9d8f44933f1ba73
  Stored in directory: /tmp/pip-ephem-wheel-cache-nk9tkk4r/wheels/92/85/95/ae76957910a5f39a8c2cd460f1d81d3647623cecea65c73f27
Successfully built fenics-ffc
Installing collected packages: fenics-ffc
Successfully installed fenics-ffc-2019.2.0.dev0
Collecting ipython
  Downloading ipython-8.1.1-py3-none-any.whl (750 kB)
Requirement already satisfied: setuptools>=18.5 in /usr/lib/python3/dist-packages (from ipython) (45.2.0)
Collecting pexpect>4.3; sys_platform != "win32"
  Downloading pexpect-4.8.0-py2.py3-none-any.whl (59 kB)
Collecting matplotlib-inline
  Downloading matplotlib_inline-0.1.3-py3-none-any.whl (8.2 kB)
Collecting pygments>=2.4.0
  Downloading Pygments-2.11.2-py3-none-any.whl (1.1 MB)
Collecting pickleshare
  Downloading pickleshare-0.7.5-py2.py3-none-any.whl (6.9 kB)
Collecting stack-data
  Downloading stack_data-0.2.0-py3-none-any.whl (21 kB)
Requirement already satisfied: decorator in /usr/lib/python3/dist-packages (from ipython) (4.4.2)
Collecting backcall
  Downloading backcall-0.2.0-py2.py3-none-any.whl (11 kB)
Collecting jedi>=0.16
  Downloading jedi-0.18.1-py2.py3-none-any.whl (1.6 MB)
Collecting traitlets>=5
  Downloading traitlets-5.1.1-py3-none-any.whl (102 kB)
Collecting prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0
  Downloading prompt_toolkit-3.0.28-py3-none-any.whl (380 kB)
Collecting ptyprocess>=0.5
  Downloading ptyprocess-0.7.0-py2.py3-none-any.whl (13 kB)
Collecting pure-eval
  Downloading pure_eval-0.2.2-py3-none-any.whl (11 kB)
Collecting asttokens
  Downloading asttokens-2.0.5-py2.py3-none-any.whl (20 kB)
Collecting executing
  Downloading executing-0.8.3-py2.py3-none-any.whl (16 kB)
Collecting parso<0.9.0,>=0.8.0
  Downloading parso-0.8.3-py2.py3-none-any.whl (100 kB)
Collecting wcwidth
  Downloading wcwidth-0.2.5-py2.py3-none-any.whl (30 kB)
Requirement already satisfied: six in /usr/lib/python3/dist-packages (from asttokens->stack-data->ipython) (1.14.0)
Installing collected packages: ptyprocess, pexpect, traitlets, matplotlib-inline, pygments, pickleshare, pure-eval, asttokens, executing, stack-data, backcall, parso, jedi, wcwidth, prompt-toolkit, ipython
  Attempting uninstall: pygments
    Found existing installation: Pygments 2.3.1
    Not uninstalling pygments at /usr/lib/python3/dist-packages, outside environment /usr
    Can't uninstall 'Pygments'. No files were found to uninstall.
Successfully installed asttokens-2.0.5 backcall-0.2.0 executing-0.8.3 ipython-8.1.1 jedi-0.18.1 matplotlib-inline-0.1.3 parso-0.8.3 pexpect-4.8.0 pickleshare-0.7.5 prompt-toolkit-3.0.28 ptyprocess-0.7.0 pure-eval-0.2.2 pygments-2.11.2 stack-data-0.2.0 traitlets-5.1.1 wcwidth-0.2.5
Removing intermediate container 3cb43140b9a2
 ---> 25bddba855d7
Step 28/33 : RUN cd dolfin &&     mkdir build &&     cd build &&     PETSC_ARCH=linux-gnu-real-32 cmake -G Ninja -DCMAKE_INSTALL_PREFIX=/usr/local/dolfin -DCMAKE_BUILD_TYPE=${DOLFIN_CMAKE_BUILD_TYPE} -DCMAKE_CXX_FLAGS=${DOLFIN_CMAKE_CXX_FLAGS} .. &&     ninja ${MAKEFLAGS} install &&     cd ../python &&     PETSC_ARCH=linux-gnu-real-32 pip3 install --target /usr/local/dolfin/lib/python3.8/dist-packages --no-dependencies .
 ---> Running in 11db897fa7b8
-- The C compiler identification is GNU 9.3.0
-- The CXX compiler identification is GNU 9.3.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Performing Test HAVE_PIPE
-- Performing Test HAVE_PIPE - Success
-- Performing Test HAVE_PEDANTIC
-- Performing Test HAVE_PEDANTIC - Success
-- Performing Test HAVE_DEBUG
-- Performing Test HAVE_DEBUG - Success
-- Performing Test HAVE_O2_OPTIMISATION
-- Performing Test HAVE_O2_OPTIMISATION - Success
-- Found MPI_C: /usr/lib/x86_64-linux-gnu/libmpich.so  
-- Found MPI_CXX: /usr/lib/x86_64-linux-gnu/libmpichcxx.so;/usr/lib/x86_64-linux-gnu/libmpich.so  
-- Found Boost: /usr/include (found version "1.71.0") found components: timer filesystem program_options iostreams chrono regex 
-- Found Eigen3: /usr/include/eigen3 (Required is at least version "3.2.90") 
-- Found PythonInterp: /usr/bin/python3 (found suitable version "3.8.10", minimum required is "3") 
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python3.8/dist-packages/ffc/__init__.py", line 24, in <module>
    from ffc.compiler import compile_form, compile_element
  File "/usr/local/lib/python3.8/dist-packages/ffc/compiler.py", line 129, in <module>
    from ffc.codegeneration import generate_code
  File "/usr/local/lib/python3.8/dist-packages/ffc/codegeneration.py", line 37, in <module>
    import ffc.uflacs.language.cnodes as L
  File "/usr/local/lib/python3.8/dist-packages/ffc/uflacs/__init__.py", line 23, in <module>
    from ffc.uflacs.uflacsrepresentation import compute_integral_ir
  File "/usr/local/lib/python3.8/dist-packages/ffc/uflacs/uflacsrepresentation.py", line 26, in <module>
    from ffc.representationutils import initialize_integral_ir
  File "/usr/local/lib/python3.8/dist-packages/ffc/representationutils.py", line 28, in <module>
    from ufl.cell import cellname2facetname
ImportError: cannot import name 'cellname2facetname' from 'ufl.cell' (/usr/local/lib/python3.8/dist-packages/ufl/cell.py)
-- UFC could not be found. (missing: UFC_INCLUDE_DIRS UFC_VERSION UFC_VERSION_OK UFC_SIGNATURE) (Required is at least version "2019.2")
-- Found PkgConfig: /usr/bin/pkg-config (found version "0.29.1") 
-- Checking for one of the modules 'craypetsc_real;PETSc'
-- Test PETSC_TEST_RUNS with shared library linking - Success
-- Looking for sys/types.h
CMake Warning (dev) at /usr/share/cmake-3.16/Modules/CheckIncludeFile.cmake:80 (message):
  Policy CMP0075 is not set: Include file check macros honor
  CMAKE_REQUIRED_LIBRARIES.  Run "cmake --help-policy CMP0075" for policy
  details.  Use the cmake_policy command to set the policy and suppress this
  warning.

  CMAKE_REQUIRED_LIBRARIES is set to:

    /usr/lib/x86_64-linux-gnu/libmpich.so

  For compatibility with CMake 3.11 and below this check is ignoring it.
Call Stack (most recent call first):
  /usr/share/cmake-3.16/Modules/CheckTypeSize.cmake:230 (check_include_file)
  cmake/modules/FindPETSc.cmake:213 (check_type_size)
  CMakeLists.txt:301 (find_package)
This warning is for project developers.  Use -Wno-dev to suppress it.

-- Looking for sys/types.h - found
-- Looking for stdint.h
-- Looking for stdint.h - found
-- Looking for stddef.h
-- Looking for stddef.h - found
-- Check size of PetscInt
-- Check size of PetscInt - done
-- Found PETSc: TRUE (found suitable version "3.13.3", minimum required is "3.7") 
-- Checking for one of the modules 'crayslepc_real;SLEPc'
-- Test SLEPC_TEST_RUNS with shared library linking - Success
-- Found SLEPc: TRUE (found suitable version "3.13.3", minimum required is "3.7") 
-- ParMETIS could not be found/configured. (missing: PARMETIS_TEST_RUNS PARMETIS_INCLUDE_DIRS PARMETIS_VERSION PARMETIS_VERSION_OK) (Required is at least version "4.0.2")
-- SUNDIALS could not be found/configured. (missing: SUNDIALS_LIBRARIES SUNDIALS_TEST_RUNS SUNDIALS_INCLUDE_DIRS SUNDIALS_VERSION SUNDIALS_VERSION_OK) (Required is at least version "3")
-- Checking for package 'SCOTCH-PT'
-- Found SCOTCH (version 6.0.9)
CMake Warning (dev) at cmake/modules/FindSCOTCH.cmake:202 (set):
  implicitly converting 'TYPE' to 'STRING' type.
Call Stack (most recent call first):
  CMakeLists.txt:335 (find_package)
This warning is for project developers.  Use -Wno-dev to suppress it.

-- Performing test SCOTCH_TEST_RUNS
-- Performing test SCOTCH_TEST_RUNS - Success
-- Found SCOTCH: /usr/local/petsc/linux-gnu-real-32/lib/libptscotch.a;/usr/local/petsc/linux-gnu-real-32/lib/libscotch.a;/usr/local/petsc/linux-gnu-real-32/lib/libptscotcherr.a  
-- Checking for package 'AMD'
-- Looking for sgemm_
-- Looking for sgemm_ - found
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE  
-- Checking for package 'UMFPACK'
-- Checking for package 'AMD'
-- Checking for package 'CHOLMOD'
-- Checking for package 'AMD'
-- Found BLAS: /usr/lib/x86_64-linux-gnu/libopenblas.so
-- Looking for cheev_
-- Looking for cheev_ - found
-- Performing test CHOLMOD_TEST_RUNS - Failed
-- Performing test UMFPACK_TEST_RUNS - Failed
-- Performing Test UMFPACK_TEST_RUNS
-- Performing Test UMFPACK_TEST_RUNS - Failed
-- UMFPACK could not be found. Be sure to set UMFPACK_DIR. (missing: UMFPACK_TEST_RUNS) 
-- Checking for package 'CHOLMOD'
-- Checking for package 'AMD'
-- Found BLAS: /usr/lib/x86_64-linux-gnu/libopenblas.so
-- Performing test CHOLMOD_TEST_RUNS - Failed
-- CHOLMOD could not be found. Be sure to set CHOLMOD_DIR. (missing: CHOLMOD_TEST_RUNS) 
-- HDF5: Using hdf5 compiler wrapper to determine C configuration
-- Found HDF5: /usr/lib/x86_64-linux-gnu/hdf5/mpich/libhdf5.so;/usr/lib/x86_64-linux-gnu/libsz.so;/usr/lib/x86_64-linux-gnu/libz.so;/usr/lib/x86_64-linux-gnu/libdl.so;/usr/lib/x86_64-linux-gnu/libm.so (found version "1.10.4") found components: C 
-- Checking for Trilinos
-- Unable to find Trilinos (>= 12.4.0)
-- Trilinos could not be found
-- Found ZLIB: /usr/lib/x86_64-linux-gnu/libz.so (found version "1.2.11") 
-- The following features have been enabled:

 * BUILD_SHARED_LIBS, Build DOLFIN with shared libraries.
 * DOLFIN_AUTO_DETECT_MPI, Detect MPI automatically (turn this off to use the MPI compiler wrappers directly via setting CXX, CXX, FC).
 * DOLFIN_WITH_LIBRARY_VERSION, Build with library version information.
 * DOLFIN_ENABLE_DOCS, Enable generation of documentation.
 * CMAKE_INSTALL_RPATH_USE_LINK_PATH, Add paths to linker search and installed rpath.
 * DOLFIN_ENABLE_MPI, Compile with support for MPI.
 * DOLFIN_ENABLE_PETSC, Compile with support for PETSc.
 * DOLFIN_ENABLE_SLEPC, Compile with support for SLEPc.
 * DOLFIN_ENABLE_TRILINOS, Compile with support for Trilinos.
 * DOLFIN_ENABLE_UMFPACK, Compile with support for UMFPACK.
 * DOLFIN_ENABLE_CHOLMOD, Compile with support for CHOLMOD.
 * DOLFIN_ENABLE_SCOTCH, Compile with support for SCOTCH.
 * DOLFIN_ENABLE_PARMETIS, Compile with support for ParMETIS.
 * DOLFIN_ENABLE_SUNDIALS, Compile with support for SUNDIALS.
 * DOLFIN_ENABLE_ZLIB, Compile with support for zlib.
 * DOLFIN_ENABLE_HDF5, Compile with support for HDF5.

-- The following OPTIONAL packages have been found:

 * MPI, Message Passing Interface (MPI)
   Enables DOLFIN to run in parallel with MPI
 * PETSc (required version >= 3.7), Portable, Extensible Toolkit for Scientific Computation, <https://www.mcs.anl.gov/petsc/>
   Enables the PETSc linear algebra backend
 * SLEPc (required version >= 3.7), Scalable Library for Eigenvalue Problem Computations, <http://slepc.upv.es/>
 * SCOTCH, Programs and libraries for graph, mesh and hypergraph partitioning, <https://www.labri.fr/perso/pelegrin/scotch>
   Enables parallel graph partitioning
 * BLAS, Basic Linear Algebra Subprograms, <http://netlib.org/blas/>
 * Threads
 * HDF5, Hierarchical Data Format 5 (HDF5), <https://www.hdfgroup.org/HDF5>
 * ZLIB, Compression library, <http://www.zlib.net>

-- The following REQUIRED packages have been found:

 * Boost, Boost C++ libraries, <http://www.boost.org>
 * Eigen3 (required version >= 3.2.90), Lightweight C++ template library for linear algebra, <http://eigen.tuxfamily.org>
 * PythonInterp (required version >= 3), Interactive high-level object-oriented language, <http://www.python.org>
 * PkgConfig

-- The following features have been disabled:

 * CMAKE_USE_RELATIVE_PATHS, Use relative paths in makefiles and projects.
 * DOLFIN_ENABLE_CODE_COVERAGE, Enable code coverage.
 * DOLFIN_ENABLE_BENCHMARKS, Enable benchmark programs.
 * DOLFIN_SKIP_BUILD_TESTS, Skip build tests for testing usability of dependency packages.
 * DOLFIN_DEPRECATION_ERROR, Turn deprecation warnings into errors.
 * DOLFIN_ENABLE_GEOMETRY_DEBUGGING, Enable geometry debugging.

-- The following OPTIONAL packages have not been found:

 * SUNDIALS (required version >= 3), SUite of Nonlinear and DIfferential/ALgebraic Equation Solvers, <http://computation.llnl.gov/projects/sundials>
   Provides robust time integrators and nonlinear solvers that can easily be incorporated into existing simulation codes.
 * UMFPACK, Sparse LU factorization library, <http://faculty.cse.tamu.edu/davis/suitesparse.html>
 * CHOLMOD, Sparse Cholesky factorization library for sparse matrices, <http://faculty.cse.tamu.edu/davis/suitesparse.html>

-- The following REQUIRED packages have not been found:

 * UFC (required version >= 2019.2), Unified language for form-compilers (part of FFC), <https://bitbucket.org/fenics-project/ffc>

-- 
-- Generating demo source files from reStructuredText
-- --------------------------------------------------
extract written to /src/dolfin/demo/documented/auto-adaptive-poisson/cpp/AdaptivePoisson.ufl
extract written to /src/dolfin/demo/documented/auto-adaptive-poisson/cpp/main.cpp
extract written to /src/dolfin/demo/documented/biharmonic/cpp/Biharmonic.ufl
extract written to /src/dolfin/demo/documented/biharmonic/cpp/main.cpp
extract written to /src/dolfin/demo/documented/built-in-meshes/cpp/main.cpp
extract written to /src/dolfin/demo/documented/eigenvalue/cpp/StiffnessMatrix.ufl
extract written to /src/dolfin/demo/documented/eigenvalue/cpp/main.cpp
extract written to /src/dolfin/demo/documented/hyperelasticity/cpp/HyperElasticity.ufl
extract written to /src/dolfin/demo/documented/hyperelasticity/cpp/main.cpp
extract written to /src/dolfin/demo/documented/mixed-poisson/cpp/MixedPoisson.ufl
extract written to /src/dolfin/demo/documented/mixed-poisson/cpp/main.cpp
extract written to /src/dolfin/demo/documented/nonmatching-interpolation/cpp/P1.ufl
extract written to /src/dolfin/demo/documented/nonmatching-interpolation/cpp/P3.ufl
extract written to /src/dolfin/demo/documented/nonmatching-interpolation/cpp/main.cpp
extract written to /src/dolfin/demo/documented/poisson/cpp/Poisson.ufl
extract written to /src/dolfin/demo/documented/poisson/cpp/main.cpp
-- 
-- Generating form files in demo, test and bench directories. May take some time...
-- ----------------------------------------------------------------------------------------
CMake Error at CMakeLists.txt:626 (message):
  Generation of form files failed:

  Traceback (most recent call last):

    File "/src/dolfin/cmake/scripts/generate-form-files.py", line 22, in <module>
      import ffc
    File "/usr/local/lib/python3.8/dist-packages/ffc/__init__.py", line 24, in <module>
      from ffc.compiler import compile_form, compile_element
    File "/usr/local/lib/python3.8/dist-packages/ffc/compiler.py", line 129, in <module>
      from ffc.codegeneration import generate_code
    File "/usr/local/lib/python3.8/dist-packages/ffc/codegeneration.py", line 37, in <module>
      import ffc.uflacs.language.cnodes as L
    File "/usr/local/lib/python3.8/dist-packages/ffc/uflacs/__init__.py", line 23, in <module>
      from ffc.uflacs.uflacsrepresentation import compute_integral_ir
    File "/usr/local/lib/python3.8/dist-packages/ffc/uflacs/uflacsrepresentation.py", line 26, in <module>
      from ffc.representationutils import initialize_integral_ir
    File "/usr/local/lib/python3.8/dist-packages/ffc/representationutils.py", line 28, in <module>
      from ufl.cell import cellname2facetname

  ImportError: cannot import name 'cellname2facetname' from 'ufl.cell'
  (/usr/local/lib/python3.8/dist-packages/ufl/cell.py)



-- Configuring incomplete, errors occurred!
See also "/src/dolfin/build/CMakeFiles/CMakeOutput.log".
See also "/src/dolfin/build/CMakeFiles/CMakeError.log".

Dear @dparsons

thank you for the clarification! Now I understand why the number of threads is set to 1 by default. Perhaps I could just rebuild these images locally and set OPENBLAS_NUM_THREADS and OMP_NUM_THREADS to the desired values?

You can check out the 2021.1.0 tag of UFL: GitHub - FEniCS/ufl at 2021.1.0
as it is the last released version compatible with legacy dolfin.
The development version of DOLFIN can be found at: GitHub - FEniCS/dolfinx: Next generation FEniCS problem solving environment
and has a demo interfacing with scipy:
dolfinx/demo_types.py at f0a5a53d548ea91989bbe1b88eb9b4edef24039e · FEniCS/dolfinx · GitHub

It was sufficient to set the environment variable OPENBLAS_NUM_THREADS to the desired number of threads. Apologies for the spam.
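
For anyone landing here later: the variable must be set before NumPy/SciPy are first imported, otherwise OpenBLAS has already sized its thread pool. A minimal sketch (8 is a placeholder value):

import os
os.environ["OPENBLAS_NUM_THREADS"] = "8"  # must happen before the first numpy/scipy import

import numpy as np
from scipy.sparse.linalg import eigs  # eigs now uses up to 8 OpenBLAS threads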