Problem with trying to get dolfin, meshio and GMSH python interface to coexist

Hello all,
I am trying to set up the FEniCS solver inside a SciPy optimization loop that also calls the GMSH Python interface and the meshio conversion package. All of this runs on a cluster under MPI. The GMSH and meshio part runs on one node only, gated by a conditional on the MPI rank - see the code below. Getting all these packages to coexist is a bit of a challenge.

The code below runs without problems.

import sys
import numpy as np
from scipy import optimize as opt
from mpi4py import MPI as nMPI
import gmsh
import meshio

def Cost(x):
#Set up processor rank for job control
   comm = nMPI.COMM_WORLD
   mpiRank = comm.Get_rank()
   h = x[0]
   w = x[1]
   print("Antenna height= {0:<f}, Antenna distance from short= {1:<f}".format(h, w))
   if mpiRank == 0:
        gmsh.initialize()
        gmsh.option.setNumber('General.Terminal', 1)

       gmsh.model.occ.addBox(0.0, 0.0, 0.0, a, b, l, 1) # WG shield
       gmsh.model.occ.addCylinder(a/2, -lc, w, 0.0, lc, 0.0, rc, 2) # Coax shield
       gmsh.model.occ.addCylinder(a/2, -lc, w, 0.0, lc+h, 0.0, r, 3) # Coax center cond & probe

       gmsh.model.occ.cut([(3,1)],[(3,3)], 4, removeObject=True, removeTool=False)
       gmsh.model.occ.cut([(3,2)],[(3,3)], 5, removeObject=True, removeTool=True)

       gmsh.option.setNumber('Mesh.MeshSizeMin', ls)
       gmsh.option.setNumber('Mesh.MeshSizeMax', lm)
       gmsh.option.setNumber('Mesh.Algorithm', 6)
       gmsh.option.setNumber('Mesh.MshFileVersion', 2.2)
       gmsh.option.setNumber('Mesh.Format', 1)
       #gmsh.option.setNumber('Mesh.Smoothing', 100)
       gmsh.option.setNumber('Mesh.MinimumCirclePoints', 36)
       gmsh.option.setNumber('Mesh.CharacteristicLengthFromCurvature', 1)

        gmsh.model.occ.fragment([(3,4),(3,5)],[], -1)
        gmsh.model.occ.synchronize() # Sync CAD kernel before tagging/meshing
       gmsh.model.addPhysicalGroup(3, [4], 1)
       gmsh.model.addPhysicalGroup(3, [5], 2)
       gmsh.model.setPhysicalName(3, 1, "Air")
       gmsh.model.setPhysicalName(3, 2, "Diel")
       pt = gmsh.model.getEntities(0)
       gmsh.model.mesh.setSize(pt, lm)
        ov = gmsh.model.getEntitiesInBoundingBox(a/2-2*r-eps, h-eps, w-2*r-eps, a/2+2*r+eps, h+eps, w+2*r+eps)
       gmsh.model.mesh.setSize(ov, ls)
       sv = gmsh.model.getEntitiesInBoundingBox(a/2-rc-eps, -eps, w-rc-eps, a/2+rc+eps, eps, w+rc+eps)
       gmsh.model.mesh.setSize(sv, ls)
       bv = gmsh.model.getEntitiesInBoundingBox(a/2-rc-eps, -lc-eps, w-rc-eps, a/2+rc+eps, -lc+eps, w+rc+eps)
       gmsh.model.mesh.setSize(bv, lc)


        gmsh.model.mesh.generate(3)
        gmsh.write("CoaxWG.msh")
        gmsh.finalize()

        msh = meshio.read("CoaxWG.msh")
        for cell in msh.cells:
            if cell.type == "tetra":
                tetra_cells = cell.data

        for key in msh.cell_data_dict["gmsh:physical"].keys():
            if key == "tetra":
                tetra_data = msh.cell_data_dict["gmsh:physical"][key]

        tetra_mesh = meshio.Mesh(points=msh.points, cells={"tetra": tetra_cells},
                                 cell_data={"name_to_read": [tetra_data]})
        meshio.write("mesh.xdmf", tetra_mesh)
   comm.Barrier()  # make all ranks wait until the mesh files are written
   return 0
#Geometry and mesh-size constants

lc = 0.05  # NOTE: shadowed by the coax length assignment below
lm = 0.2
ls = 0.02
a = 2.286  # WG width
b = 1.016  # WG height
l = 4.0    # WG length
h1 = 0.648+0.14  # Starting probe height
r = 0.065  # probe radius
w1 = 0.554-0.25  # Starting probe distance from short
rc = 0.22  # Coax outer rad
lc = 1.0   # Coax length
eps = 1.0e-3

#Set up optimization
bounds = opt.Bounds([h1-0.3, w1-0.4], [h1+0.3, w1+0.4]) # The trial range
x0 = np.array([h1, w1]) # Starting point
Cost(x0) # Single evaluation for testing; uncomment below for the full optimization
#res = opt.minimize(Cost, x0, method='trust-constr', options={'verbose': 1}, bounds=bounds)

When it exits, the converted mesh files are present.

When I add the Dolfin import line in the import block at the beginning,

import sys
import numpy as np
from scipy import optimize as opt
from mpi4py import MPI as nMPI
import gmsh
import meshio
import dolfin

I get the following error message

Warning! ***HDF5 library version mismatched error***
The HDF5 header files used to compile this application do not match
the version used by the HDF5 library to which this application is linked.
Data corruption or segmentation faults may occur if the application continues.
This can happen when an application was compiled by one version of HDF5 but
linked with a different version of static or shared HDF5 library.
You should recompile the application or check your shared library related
settings such as 'LD_LIBRARY_PATH'.
You can, at your own risk, disable this warning by setting the environment
variable 'HDF5_DISABLE_VERSION_CHECK' to a value of '1'.
Setting it to 2 or higher will suppress the warning messages totally.
Headers are 1.12.0, library is 1.10.0

General Information:
               HDF5 Version: 1.10.0-patch1
              Configured on: Sun Aug 13 23:12:57 UTC 2017
              Configured by: buildd@lgw01-21
                Host system: x86_64-pc-linux-gnu
             Uname information: Linux lgw01-21 4.4.0-91-generic #114-Ubuntu SMP Tue Aug 8 11:56:56       UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
                   Byte sex: little-endian
         Installation point: /usr
	    Flavor name: openmpi


Where is this other version of h5py coming from? From inside dolfin? (I am using the 2019.1.0 version of FEniCS on an Ubuntu 18.04 cluster.) How can I avoid this error?

Dolfin uses HDF5 and has its own version installed.
See: Gmsh 4.4.1 in FEniCS? Meshio - #9 by SantiagoOrtiz
for a proposed solution.

Your warning message says you have headers from HDF5 1.12.0 installed. HDF5 1.12 is not available on Ubuntu 18.04, so you have library version contamination from some source external to the Ubuntu packaging system.

Indeed, it turned out that using pip to install h5py had pulled in the latest version. Uninstalling the pip version and running sudo apt install python3-h5py installed the correct version for Ubuntu 18.04. Now dolfin no longer reports any incompatibility errors. Thanks!


Here is the result of everything working together, for those people interested in using Fenics inside of an optimization loop. Cheers!


Great to hear you got it working well.

Note that later releases of Debian and Ubuntu have configured h5py with MPI support (mainly for file I/O). There was demand to retain the serial build, so both are available. By default the python3-h5py package pulls in python3-h5py-serial, so when it becomes available on your system you might want to consider installing python3-h5py-mpi.

The two h5py builds are coinstallable. Python initialisation is configured to load either the serial or mpi version depending on whether the python process is running under MPI or not.

Great tip. Spent a long time searching for a solution and this worked like a charm. Hope this tip gets highlighted.