Error loading mesh in ".h5" format in parallel

Hi,
I am getting the following error when calling the HDF5File() function while running in parallel:

*** -------------------------------------------------------------------------
*** Error:   Unable to open HDF5 file.
*** Reason:  HDF5 has not been compiled with support for MPI.
*** Where:   This error was encountered inside HDF5File.cpp.
*** Process: 1
*** 
*** DOLFIN version: 2019.1.0
*** Git changeset:  74d7efe1e84d65e9433fd96c50f1d278fa3e3f3f
*** -------------------------------------------------------------------------

FEniCS 2019.1 is compiled from source on a cluster. The HDF5 library (version 1.10.1) is provided by the cluster. The CMake output when configuring DOLFIN indicates that the HDF5 library is detected, but it says nothing about MPI support:

-- The following OPTIONAL packages have been found:

 * MPI, Message Passing Interface (MPI)
   Enables DOLFIN to run in parallel with MPI
 * PETSc (required version >= 3.7), Portable, Extensible Toolkit for Scientific Computation, <https://www.mcs.anl.gov/petsc/>
   Enables the PETSc linear algebra backend
 * SLEPc (required version >= 3.7), Scalable Library for Eigenvalue Problem Computations, <http://slepc.upv.es/>
 * UMFPACK, Sparse LU factorization library, <http://faculty.cse.tamu.edu/davis/suitesparse.html>
 * BLAS, Basic Linear Algebra Subprograms, <http://netlib.org/blas/>
 * Threads
 * CHOLMOD, Sparse Cholesky factorization library for sparse matrices, <http://faculty.cse.tamu.edu/davis/suitesparse.html>
 * HDF5, Hierarchical Data Format 5 (HDF5), <https://www.hdfgroup.org/HDF5>
 * ZLIB, Compression library, <http://www.zlib.net>

I am wondering if anyone has any insight on how to go about fixing this issue.
Thank you very much for your time.

I’ve never had much luck trying to build FEniCS from source; it’s hard to get all the dependencies to work together. The easiest way I’ve found of running FEniCS on clusters is to use containerization, if it’s available. If your cluster supports Singularity, you can convert the Docker containers into Singularity images, and avoid building the software yourself.
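To make that concrete, here is a rough sketch of the workflow (the image path quay.io/fenicsproject/stable is the FEniCS project's official Docker image; the script name and process count are just placeholders):

```shell
# Pull the official FEniCS Docker image and convert it to a
# Singularity image file (fenics.sif) in one step.
singularity pull fenics.sif docker://quay.io/fenicsproject/stable:latest

# Run a 4-process parallel job inside the container, using the
# host's MPI launcher. demo.py is a placeholder for your script.
mpirun -n 4 singularity exec fenics.sif python3 demo.py
```

Depending on the cluster you may need `singularity exec --bind` to mount your scratch directory, and the host MPI launcher has to be compatible with the MPI inside the image, so check your site's documentation.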

HDF5 can be built with or without MPI parallelisation. Often systems will have both available. You need to make sure you’re building against the MPI version, e.g.
cmake -DDOLFIN_ENABLE_HDF5:BOOL=ON -DHDF5_C_COMPILER_EXECUTABLE:FILEPATH=/usr/bin/h5pcc ..

i.e. specify h5pcc rather than h5cc to pick up the parallel build.
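A quick way to confirm which build you have: the HDF5 compiler wrappers print their build configuration with `-showconfig` (standard for h5cc/h5pcc; the exact wrapper name and path on your cluster are assumptions):

```shell
# Print the wrapper's build configuration and look for the parallel
# flag: a serial build reports "Parallel HDF5: no", an MPI-enabled
# build reports "Parallel HDF5: yes".
h5pcc -showconfig | grep "Parallel HDF5"
```

If only a serial h5cc is on your PATH, look for a separate parallel HDF5 module on the cluster (often something like `module load hdf5-mpi`, though the module name varies by site) before re-running cmake.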

Thank you very much!