I/O from XDMF/HDF5 files in dolfin-x

There is no libpetsc.so file inside /usr/lib/x86_64-linux-gnu/ in my container. I ran @dokken’s commands without any errors, though, with pip reporting "Requirement already satisfied" twice.

Ok. If it’s respecting HDF5_MPI="ON" then hopefully it will help you.
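For reference, a quick way to check whether the installed h5py actually picked up MPI support (i.e. whether HDF5_MPI="ON" took effect) is:

import h5py
print(h5py.get_config().mpi, h5py.version.hdf5_version)

It should print True, followed by the HDF5 version h5py was built against.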

For anyone out there looking for a way to read and write files in parallel, I use PetscBinaryIO now. The syntax goes like this:

import PetscBinaryIO
from mpi4py.MPI import COMM_WORLD as comm

io = PetscBinaryIO.PetscBinaryIO(complexscalars=True)
# here "function" is an existing dolfinx fem.Function; view its dof array as a PETSc binary IO Vec
solnAsPetscBiIOVec = function.vector.array_w.view(PetscBinaryIO.Vec)
io.writeBinaryFile(f"function_proc{comm.rank}of{comm.size}-C.dat",
                   [solnAsPetscBiIOVec])

# readBinaryFile returns a tuple of objects; take the first (and only) one
function.vector[...] = io.readBinaryFile(f"function_proc{comm.rank}of{comm.size}-C.dat")[0]

Be aware that in Docker you need to run the command export PYTHONPATH=/usr/local/petsc/lib/petsc/bin/:$PYTHONPATH first, or the above code will fail.

Obviously the files produced are discretisation- and process-count-dependent. In my experience, though, things work fine as long as every program reads them once.


I’ve started developing checkpointing using ADIOS2 at: GitHub - jorgensd/adios4dolfinx: Interface of ADIOS2 for DOLFINx

It currently works in serial and parallel with Lagrange/DG elements.

There is still some more work required to get elements such as N1Curl and RT to work in parallel (they currently work in serial).


Is this available in the anaconda version?

The library I am referring to above is built on top of DOLFINx.
If you have installed DOLFINx with conda, you should install it with pip inside the conda environment.
You can run:

conda activate name-of-your-dolfinx-env
git clone https://github.com/jorgensd/adios4dolfinx.git
cd adios4dolfinx
git checkout v0.1.0
python -m pip install .

An update on the above:

This is now resolved in the newest release (0.2.1), which works with the main branch of DOLFINx (and is therefore not compatible with the conda release).
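For reference, on a main-branch DOLFINx installation the new release can be installed directly from GitHub, along the lines of (assuming the release is tagged v0.2.1, analogous to the v0.1.0 tag used above):

python -m pip install git+https://github.com/jorgensd/adios4dolfinx@v0.2.1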


Great, thank you. Now I have adios4dolfinx installed, but I can’t create a working example. Following https://github.com/jorgensd/adios4dolfinx/blob/main/tests/test_checkpointing.py#L31, this is what I have:

from dolfinx import mesh, fem
import adios4dolfinx
from mpi4py import MPI
from petsc4py import PETSc

# setup test
mesh_comm = MPI.COMM_WORLD
domain = mesh.create_unit_square(mesh_comm, 5, 5, cell_type=mesh.CellType.triangle)
Vin = fem.FunctionSpace(domain, ("CG",1))

filename = 'test.bp'

# output
uin = fem.Function(Vin)
uin.x.array[:] = 2
adios4dolfinx.write_mesh(domain, filename)
adios4dolfinx.write_function(uin, filename)

# import
engine = "BP4"
mesh = adios4dolfinx.read_mesh(mesh_comm, filename, engine, mesh.GhostMode.shared_facet)
Vout = fem.FunctionSpace(mesh, ("CG",1))
uout = fem.Function(Vout)
adios4dolfinx.read_function(uout, filename, engine)

and I get the error message

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
Cell In[10], line 21
     19 # import
     20 engine = "BP4"
---> 21 mesh = adios4dolfinx.read_mesh(mesh_comm, filename, engine, mesh.GhostMode.shared_facet)
     22 Vout = fem.FunctionSpace(mesh, ("CG",1))
     23 uout = fem.Function(Vout)

File /usr/local/anaconda3/envs/fenicsx-env/lib/python3.10/site-packages/adios4dolfinx/checkpointing.py:104, in read_mesh(comm, file, engine, ghost_mode)
    102 # Get mesh cell type
    103 if "CellType" not in io.AvailableAttributes().keys():
--> 104     raise KeyError("Mesh cell type not found at CellType")
    105 celltype = io.InquireAttribute("CellType")
    106 cell_type = celltype.DataString()[0]

KeyError: 'Mesh cell type not found at CellType'

I mostly copied from the test; any idea what I’m doing wrong?

In v0.1.0, you have to write the mesh and function to separate files.

I fixed this in v0.2.0 (but I guess you can’t use it if you are using conda).
See the test file linked above for the syntax for 0.1.0.
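With v0.1.0, your example would therefore write the mesh and the function to two different files, roughly like this (following the call pattern in your snippet; the file names are only illustrative):

# write mesh and function to separate files (required in v0.1.0)
adios4dolfinx.write_mesh(domain, "mesh.bp")
adios4dolfinx.write_function(uin, "function.bp")

# read them back from their respective files
engine = "BP4"
mesh_in = adios4dolfinx.read_mesh(mesh_comm, "mesh.bp", engine, mesh.GhostMode.shared_facet)
Vout = fem.FunctionSpace(mesh_in, ("CG", 1))
uout = fem.Function(Vout)
adios4dolfinx.read_function(uout, "function.bp", engine)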


I’ve been playing around with adios4dolfinx, and first, could I confirm that numba is only really used to decrease computational time? Numba is a real pain to install, so I just gave up and commented out the lines containing it.

And indeed, functions that were written by m processes and read by n processes seem to be fine without numba, but only if I use finite elements based on a mesh read with adios4dolfinx.read_mesh. It seems like adios2 partitions the mesh completely differently from the XDMF route (reading/writing mesh and meshtags with XDMF and then using XDMF’s write_function, or using PetscBinaryIO as in an earlier response). That’s fine for simple geometries, but I have a complex geometry with cell and facet meshtags and Measure() objects based on those tags. Is there a way to recover meshtags with adios4dolfinx, or perhaps a way to map Functions from adios4dolfinx’s mesh onto the XDMF mesh?

Yes, it is to speed up the code.

ADIOS2 does no partitioning of the mesh. It uses the partitioning DOLFINx has already computed (with SCOTCH, ParMETIS or KaHIP) to write out the data.

It is a fair question, and I will work on this over Easter. I’ve added an issue: Add function to write `MeshTags` · Issue #9 · jorgensd/adios4dolfinx · GitHub to remind me and to give a better place to keep track of progress.

That’s interesting, because the result of the partitioning seems completely different with adios4dolfinx. I wrote a quick MWE to illustrate what I mean:

from mpi4py import MPI
import dolfinx
from dolfinx.fem import (Function, FunctionSpace, Expression)
from dolfinx.io import XDMFFile
import ufl
from petsc4py import PETSc
import adios4dolfinx

comm = MPI.COMM_WORLD
mesh = dolfinx.mesh.create_unit_square(comm, 100,100, cell_type=dolfinx.mesh.CellType.triangle)
filename = f"./{comm.size}proc.bp"

### write ###
v_cg2 = ufl.VectorElement("CG", mesh.ufl_cell(), 2)
V = FunctionSpace(mesh, v_cg2)
u = Function(V)
x = ufl.SpatialCoordinate(mesh)
dfnx_expr = Expression(x, V.element.interpolation_points())
u.interpolate(dfnx_expr)
adios4dolfinx.write_mesh(mesh, "./mesh/square.bp")
adios4dolfinx.write_function(u, filename)

### read ###
mesh_new = adios4dolfinx.read_mesh(comm, "./mesh/square.bp", "BP4", dolfinx.mesh.GhostMode.shared_facet)
v_new = ufl.VectorElement("CG", mesh_new.ufl_cell(), 2)
W = FunctionSpace(mesh_new, v_new)
w = Function(W)
v = Function(V)
adios4dolfinx.read_function(w, filename)
adios4dolfinx.read_function(v, filename)
#v.x.array[:] = w.x.array # array sizes not compatible?
with XDMFFile(comm, f"./0_{comm.size}proc.xdmf", "w") as xdmf:
    xdmf.write_mesh(mesh_new)
    xdmf.write_function(w)
with XDMFFile(comm, f"./1_{comm.size}proc.xdmf", "w") as xdmf:
    xdmf.write_mesh(mesh)
    xdmf.write_function(u)
with XDMFFile(comm, f"./2_{comm.size}proc.xdmf", "w") as xdmf:
    xdmf.write_mesh(mesh)
    xdmf.write_function(w)
with XDMFFile(comm, f"./3_{comm.size}proc.xdmf", "w") as xdmf:
    xdmf.write_mesh(mesh_new)
    xdmf.write_function(u)
with XDMFFile(comm, f"./4_{comm.size}proc.xdmf", "w") as xdmf:
    xdmf.write_mesh(mesh)
    xdmf.write_function(v)

Here the first two XDMF files (0 & 1) work just fine if I run them on any number of processes (or in fact split the code in two and run the write part with m processes and the read part with n processes, as intended, which is great).

However, any kind of mixing and matching (XDMF files 2-4) doesn’t work, even when run as a single script on n processes, and the results look like it’s due to differences in partitioning (corroborated by the fact that w.x.array is not the same size as v.x.array). I would like to do this because I have meshtags for mesh but not for mesh_new (in my non-MWE), and I also use multiphenicsx’s dof restrictions. Why does DOLFINx partition the same mesh in two different ways just because one of them goes through adios2? If there’s no particular reason, can this be changed?

Below is an example of the results of a mixed XDMF file

So the issue here is that you are making certain assumptions when you use a built-in mesh.

A “built-in” mesh is created like any other mesh: a set of points and their connectivities, stored as two separate arrays.

These cells and points are sent to a mesh partitioner of choice (the default is SCOTCH), which partitions the mesh. The mesh keeps track of its input indices (under mesh.topology.original_cell_index).
These indices are in turn used to read in MeshTags of various dimensions.

However, when a mesh is written to file, we do not revert it to its original indices. It is written in a contiguous fashion, as this is the most efficient.

This means that whenever you read in that mesh from file, the cells are shuffled before they are sent to the mesh partitioner.

This means that there is no direct mapping between the mesh generated with create_unit_square and the one read from file.

You can see that XDMF and adios2 treat the mesh similarly by running:


from mpi4py import MPI
import dolfinx
from dolfinx.fem import (Function, FunctionSpace, Expression)
from dolfinx.io import XDMFFile
import ufl
from petsc4py import PETSc
import adios4dolfinx

comm = MPI.COMM_WORLD
mesh = dolfinx.mesh.create_unit_square(comm, 10, 10, cell_type=dolfinx.mesh.CellType.triangle)
filename = f"./{comm.size}proc.bp"

### write ###
v_cg2 = ufl.VectorElement("Lagrange", mesh.ufl_cell(), 2)
V = FunctionSpace(mesh, v_cg2)
u = Function(V)
x = ufl.SpatialCoordinate(mesh)
dfnx_expr = Expression(x, V.element.interpolation_points())
u.interpolate(dfnx_expr)
adios4dolfinx.write_mesh(mesh, "./mesh/square.bp")
adios4dolfinx.write_function(u, filename)


### write mesh ###
with dolfinx.io.XDMFFile(comm, "mesh.xdmf", "w") as xdmf:
    xdmf.write_mesh(mesh)
    top = xdmf.read_topology_data()
### read ###

with dolfinx.io.XDMFFile(comm, "mesh.xdmf", "r") as xdmf:
    mesh_xdmf = xdmf.read_mesh()

mesh_new = adios4dolfinx.read_mesh(comm, "./mesh/square.bp", "BP4", dolfinx.mesh.GhostMode.none)


print(comm.rank,
      "mesh input index", mesh.topology.original_cell_index, "\n",
      mesh_new.topology.original_cell_index, "\n",
      "\n",
      mesh_xdmf.topology.original_cell_index)


v_new = ufl.VectorElement("Lagrange", mesh_new.ufl_cell(), 2)


W = FunctionSpace(mesh_new, v_new)
w = Function(W)
adios4dolfinx.read_function(w, filename)
#adios4dolfinx.read_function(v, filename)
with XDMFFile(comm, f"./0_{comm.size}proc.xdmf", "w") as xdmf:
    xdmf.write_mesh(mesh_new)
    xdmf.write_function(w)
with XDMFFile(comm, f"./1_{comm.size}proc.xdmf", "w") as xdmf:
    xdmf.write_mesh(mesh)
    xdmf.write_function(u)
with XDMFFile(comm, f"./2_{comm.size}proc.xdmf", "w") as xdmf:
    xdmf.write_mesh(mesh)
    xdmf.write_function(w)

with XDMFFile(comm, f"./3_{comm.size}proc.xdmf", "w") as xdmf:
    xdmf.write_mesh(mesh_xdmf)
    xdmf.write_function(w)

Here, the third example (the mesh read from XDMF) gives the correct output.


Hello,
I’ve been following the developments here and was playing around with the new adios2 interface in the latest nightly dolfinx container.
I would expect that setting a function, writing it, and reading it back in again should yield the same result, but this MWE fails in my case (only tested in serial):

#!/usr/bin/env python3
from mpi4py import MPI
import numpy as np
import os
from dolfinx import fem, mesh
import adios4dolfinx

comm = MPI.COMM_WORLD

msh = mesh.create_box(comm, [np.array([0.0, 0.0, 0.0]),np.array([2.0, 1.0, 1.0])], [2, 2, 2],
               mesh.CellType.hexahedron, mesh.GhostMode.none)

V = fem.VectorFunctionSpace(msh, ("CG", 2))
f = fem.Function(V, name="fiber")

# some random numbers
f.vector[:] = np.random.rand(f.vector.getSize())

print(f.vector[:])
    
# seems that we need to delete old folder, otherwise adios will not overwrite
os.system('rm -rf tmpfib.bp')
    
# write function using adios4dolfinx
adios4dolfinx.write_function(f, 'tmpfib.bp')

# read function using adios4dolfinx
fnew = fem.Function(V, name="fiber_new")
adios4dolfinx.read_function(fnew, 'tmpfib.bp')

print(fnew.vector[:])

arr = f.vector.array - fnew.vector.array
print(arr)

# should be the same
assert(np.allclose(f.vector.array, fnew.vector.array))

I would expect the array arr to be zero and hence the assert statement to pass. Interestingly, it does pass if I reduce the mesh, e.g. from [2, 2, 2] to [1, 1, 1].

Am I missing something here?

Thanks!

See my comments in the previous posts.
If you simply want to read and write the function data in the same script, you could use something a lot more lightweight (since the number of processes is then the same, all the data is local). I would use either the adios2 Python interface or just h5py to read and write local chunks of u.x.array to and from file.
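For example, a minimal sketch with h5py, writing one file per rank (this assumes the write and the read happen with the same number of processes and the same mesh partitioning; u is a dolfinx fem.Function, as in your MWE):

import h5py
from mpi4py import MPI

comm = MPI.COMM_WORLD

# write the local (owned + ghost) part of the dof array, one file per rank
with h5py.File(f"u_rank{comm.rank}of{comm.size}.h5", "w") as h5:
    h5.create_dataset("x", data=u.x.array)

# later, in a run with the same number of processes and the same partitioning
with h5py.File(f"u_rank{comm.rank}of{comm.size}.h5", "r") as h5:
    u.x.array[:] = h5["x"][:]
u.x.scatter_forward()  # refresh ghost values after reading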

Ok, thanks for the reply. Well, this was just an MWE. In my application, I have a preprocessing step (running in serial) that writes a field (e.g. a POD mode, a fiber field, …). Then, in my simulation, running in parallel, I want to make use of this data and need to read it in somehow.
The solution in my post I/O from XDMF/HDF5 files in dolfin-x - #12 by marchirschvogel has worked for me so far and is still working, but it suffers when the mesh becomes very large (>= 1M DOFs), since it searches for matching coordinates.

So I’m looking for a fast and efficient way to read previously (serially) written data in parallel.

For an application where the mesh is read from the checkpoint file along with the function, adios4dolfinx works. That seems to be what your actual workflow is.

It does not work as checkpointing (writing and reading back onto the same mesh) during a simulation workflow.
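Schematically, the workflow that does work looks like this (same adios4dolfinx calls as in the MWEs above; the paths and the element are only illustrative):

# --- preprocessing script (may run in serial); msh and f are the mesh and the Function to store
adios4dolfinx.write_mesh(msh, "checkpoint/mesh.bp")
adios4dolfinx.write_function(f, "checkpoint/field.bp")

# --- simulation script (run in parallel); rebuild everything from the checkpoint
from mpi4py import MPI
import dolfinx
from dolfinx import fem
import adios4dolfinx

msh_in = adios4dolfinx.read_mesh(MPI.COMM_WORLD, "checkpoint/mesh.bp", "BP4",
                                 dolfinx.mesh.GhostMode.shared_facet)
V_in = fem.VectorFunctionSpace(msh_in, ("CG", 2))  # same element as used when writing
f_in = fem.Function(V_in)
adios4dolfinx.read_function(f_in, "checkpoint/field.bp")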

Ok, I see, thanks. So basically, if I wanted to read any field, I would skip the XDMFFile read-mesh routine and instead use the mesh written by adios4dolfinx. But then I wonder how to retrieve the meshtags (I see the issue has been raised by @hermanmak in a previous post…).

Yes, you would use the read mesh from adios4dolfinx.
For the meshtags: I’ve simply not had the bandwidth to work on this yet; it shouldn’t be too complicated to do (if you want to give it a go).


I want to share what is now working much faster for me than my old approach via the coordinates.
In preprocessing (serial), read in your mesh with the standard XDMF routine, set the values of a function, and write a text file using
igi = msh.geometry.input_global_indices

with the following structure

igi1 fx1 fy1 fz1
igi2 fx2 fy2 fz2
...
igin fxn fyn fzn

so the input node index followed by the x, y, z values of your vector field (or the single value of a scalar field).
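A sketch of that serial write step could look like this (assuming a vertex-based space, i.e. degree 1 on a first-order mesh, so that the dof ordering follows the geometry nodes, which is also what the read routine below relies on; the file name field.txt is only illustrative):

igi = msh.geometry.input_global_indices   # original node numbering from the input mesh file
bs = f.vector.getBlockSize()              # e.g. 3 for a 3D vector field, 1 for a scalar
vals = f.x.array.reshape(-1, bs)          # one row of values per node (serial run)
with open("field.txt", "w") as fh:
    for node, gi in enumerate(igi):
        fh.write(f"{gi} " + " ".join(str(v) for v in vals[node]) + "\n")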

Then, in parallel, use

def readfunction(f, V, datafile, msh):

    # block size of vector
    bs = f.vector.getBlockSize()

    # load data and input node indices
    data = np.loadtxt(datafile,usecols=(np.arange(1,bs+1)),ndmin=2)
    ind_file = np.loadtxt(datafile,usecols=(0),dtype=int)

    # index map and input indices
    im = np.asarray(V.dofmap.index_map.local_to_global(np.arange(V.dofmap.index_map.size_local + V.dofmap.index_map.num_ghosts, dtype=np.int32)), dtype=PETSc.IntType)
    igi = msh.geometry.input_global_indices

    # in parallel, the ordering of the dof ids might change, so we have to find the
    # mapping between the original and the new ids via the input global node indices
    ci = 0
    for i in im:

        ind = np.where(ind_file == igi[ci])[0]

        # only write if we've found the index
        if len(ind):
            for j in range(bs):
                f.vector[bs*i+j] = data[ind[0],j]

        ci+=1

    f.vector.assemble()
    f.vector.ghostUpdate(addv=PETSc.InsertMode.INSERT, mode=PETSc.ScatterMode.FORWARD)
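It can then be called in the parallel run, for example, like this (hypothetical names, consistent with the vertex-based assumption above):

V = fem.VectorFunctionSpace(msh, ("CG", 1))  # vertex-based space matching the node-wise text file
f = fem.Function(V)
readfunction(f, V, "field.txt", msh)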

Hello, I am trying to use adios4dolfinx. I installed FEniCSx with conda, and to install adios2 I just ran (in the same environment) conda install -c conda-forge adios2 numba; after that, the command pip install git+https://github.com/jorgensd/adios4dolfinx@v0.1.0 works fine.

So I have a function of mixed elements:

velocity_element = VectorElement("Lagrange", mesh.ufl_cell(), 2)
pressure_element = FiniteElement("Lagrange", mesh.ufl_cell(), 1)
W = FunctionSpace(mesh, MixedElement([velocity_element, pressure_element]))

then I start a simulation and get a solution w = Function(W).

Then I just want to test adios4dolfinx and write:

import adios4dolfinx
u, p = w.split()
adios4dolfinx.write_function(u, "my_file")
adios4dolfinx.read_function(u, "my_file")

I get only a warning for the write_function call, but get this error for the read_function call:

WARNING:py.warnings:/home/remi/anaconda3/envs/FEniCSx_env/lib/python3.10/site-packages/adios4dolfinx/utils.py:104: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
  def find_first(b: int, a: npt.NDArray[np.int32]):

Traceback (most recent call last):
  File "/home/remi/Bureau/Python/PhD_envs/FEniCSx_validation/launch_simulation.py", line 104, in <module>
    adios4dolfinx.read_function(u, "wola")
  File "/home/remi/anaconda3/envs/FEniCSx_env/lib/python3.10/site-packages/adios4dolfinx/checkpointing.py", line 229, in read_function
    input_cells = mesh.topology.original_cell_index[local_cells]
IndexError: index 4417 is out of bounds for axis 0 with size 4396

What am I missing?

A minimal working example is given just below, where the mesh is created and a solution is initialized, but nothing is computed. I launch it with the command mpirun -n 8 python mwe.py.

############################################
#------------ Mesh Creation ---------------#
############################################
import gmsh
import time
import sys
import numpy as np

r = 0.05
c_x = 0.2
c_y=0.2
gdim = 2

from dolfinx.io import gmshio
from mpi4py import MPI
from dolfinx.fem import Function, FunctionSpace

mesh_comm = MPI.COMM_WORLD
model_rank = 0

from ufl import (FiniteElement,  VectorElement, MixedElement, split)





# Calculating the time that mesh generation will take
tic = time.perf_counter()

gmsh.initialize()

gmsh.model.add("cylinder")

# target mesh size for any point
lc = 0.05
lc = 0.1

if mesh_comm.rank == model_rank:

	############################################
	#---------- Geometry creation -------------#
	############################################
	## 0 dim : Points definition
	# Rectangle delimiting the flow
	gmsh.model.geo.addPoint(0, 0, 0, lc, 1)
	gmsh.model.geo.addPoint(2.2, 0, 0, lc, 2)
	gmsh.model.geo.addPoint(2.2, 0.41, 0, lc, 3)
	gmsh.model.geo.addPoint(0, 0.41, 0, lc, 4)

	# Points to create the circle for the cylinder (4 points + center)
	gmsh.model.geo.addPoint(c_x, c_y, 0, lc, 5) # center
	gmsh.model.geo.addPoint(c_x + r, c_y, 0, lc, 6)
	gmsh.model.geo.addPoint(c_x, c_y + r, 0, lc, 7)
	gmsh.model.geo.addPoint(c_x - r, c_y, 0, lc, 8)
	gmsh.model.geo.addPoint(c_x, c_y - r, 0, lc, 9)

	# Points to create the polygon (pentagon in this case) that will include the cylinder and the wake
	# In that pentagon, the mesh will be refined
	gmsh.model.geo.addPoint(c_x - 2*r, c_y, 0, lc, 10)
	gmsh.model.geo.addPoint(c_x, c_y + 2*r, 0, lc, 11)
	gmsh.model.geo.addPoint(2.2, c_y + 2*r, 0, lc, 12)
	gmsh.model.geo.addPoint(2.2, c_y - 2*r, 0, lc, 13)
	gmsh.model.geo.addPoint(c_x, c_y - 2*r, 0, lc, 14)

	## 1 dim: Lines and Curves
	# Lines for the rectangle
	gmsh.model.geo.addLine(1, 2, 1)
	gmsh.model.geo.addLine(2, 13, 2)
	gmsh.model.geo.addLine(3, 4, 3)
	gmsh.model.geo.addLine(4, 1, 4)
	gmsh.model.geo.addLine(12, 3, 5)

	# Lines for the pentagon
	gmsh.model.geo.addLine(10, 11, 6)
	gmsh.model.geo.addLine(11, 12, 7)
	gmsh.model.geo.addLine(12, 13, 8)
	gmsh.model.geo.addLine(13, 14, 9)
	gmsh.model.geo.addLine(14, 10, 10)

	# Creation of the circle for the cylinder (4 CircleArcs joining the 4 points previously defined)
	gmsh.model.geo.addCircleArc(6, 5, 7, tag=11)
	gmsh.model.geo.addCircleArc(7, 5, 8, tag=12)
	gmsh.model.geo.addCircleArc(8, 5, 9, tag=13)
	gmsh.model.geo.addCircleArc(9, 5, 6, tag=14)


	## 2 dim
	# Creation of the plane surface: rectangle - pentagon
	gmsh.model.geo.addCurveLoop([1, 2, 9, 10, 6, 7, 5, 3, 4], 1)
	gmsh.model.geo.addCurveLoop(list(range(6, 11)), 2)
	gmsh.model.geo.addPlaneSurface([1], 1)


	############################################
	#------ Parameters & Physical groups ------#
	############################################
	# Creation of a curve for the polygon
	curved_loop_list = [2]

	# Creation of curves loops for the cylinder
	gmsh.model.geo.addCurveLoop([11, 12, 13, 14], 3)
	curved_loop_list.append(3)

	# Creation of the plane surface: polygon - cylinder
	gmsh.model.geo.addPlaneSurface(curved_loop_list, 2)

	gmsh.model.geo.synchronize()

	## Fields
	# Distance field to calculate the distance from the cylinder
	gmsh.model.mesh.field.add("Distance", 1)
	gmsh.model.mesh.field.setNumbers(1, "CurvesList", list(range(11, 15)))
	gmsh.model.mesh.field.setNumber(1, "Sampling", 100)

	# Threshold field to refine the mesh around the cylinder
	gmsh.model.mesh.field.add("Threshold", 2)
	gmsh.model.mesh.field.setNumber(2, "InField", 1)
	gmsh.model.mesh.field.setNumber(2, "SizeMin", lc / 20)
	gmsh.model.mesh.field.setNumber(2, "SizeMax", lc)
	gmsh.model.mesh.field.setNumber(2, "DistMin", r/2)
	gmsh.model.mesh.field.setNumber(2, "DistMax", r)

	# Constant mesh size inside polygon
	gmsh.model.mesh.field.add("Constant", 3)
	gmsh.model.mesh.field.setNumber(3, "VIn", lc/5)
	gmsh.model.mesh.field.setNumber(3, "VOut", lc)
	gmsh.model.mesh.field.setNumbers(3, "SurfacesList", [2])

	# Distance field to calculate distance from the outlet
	gmsh.model.mesh.field.add("Distance", 4)
	gmsh.model.mesh.field.setNumbers(4, "CurvesList", [2, 8, 5])
	gmsh.model.mesh.field.setNumber(4, "Sampling", 100)

	# Threshold field to refine the mesh towards the outlet
	gmsh.model.mesh.field.add("Threshold", 5)
	gmsh.model.mesh.field.setNumber(5, "InField", 4)
	gmsh.model.mesh.field.setNumber(5, "SizeMin", lc/5)
	gmsh.model.mesh.field.setNumber(5, "SizeMax", lc)
	gmsh.model.mesh.field.setNumber(5, "DistMin", r/2)
	gmsh.model.mesh.field.setNumber(5, "DistMax", r)

	# Final field as in t10.py, take the minimum mesh size for all Threshold fields.
	gmsh.model.mesh.field.add("Min", 6)
	gmsh.model.mesh.field.setNumbers(6, "FieldsList", [2, 3, 5])
	gmsh.model.mesh.field.setAsBackgroundMesh(6)

	## Create Physical Groups
	gmsh.model.addPhysicalGroup(1, [1, 3], name="walls") # walls: top and bottom sides of the rectangle
	gmsh.model.addPhysicalGroup(1, [4], name="inlet") # inlet: left side of the rectangle
	gmsh.model.addPhysicalGroup(1, [2, 8, 5], name="outlet") # outlet: right side of the rectangle
	gmsh.model.addPhysicalGroup(1, list(range(11, 15)), name="cylinder") # obstacle (the cylinder)
	gmsh.model.addPhysicalGroup(2, [1, 2], name="control_volume", tag=5) # union of the two plane surfaces: where the fluid flows

	gmsh.model.geo.synchronize()

	# printing how long the mesh creation took
	toc = time.perf_counter()
	print("Mesh generation took {} secondes !".format(toc-tic))


	############################################
	#----- Mesh Saving & Post-processing ------#
	############################################
	# Generate and write to file
	gmsh.model.mesh.generate(gdim)
	gmsh.write("cylinder_mesh.msh")
	gmsh.write("cylinder_mesh.vtk")






# reading mesh from file
mesh, cell_tags, facet_tags = gmshio.read_from_msh("cylinder_mesh.msh", mesh_comm, model_rank, gdim=gdim)
# creating the space on which the solution will be defined, here it is Taylor-Hood element P2-P1.
velocity_element = VectorElement("Lagrange", mesh.ufl_cell(), 2)
pressure_element = FiniteElement("Lagrange", mesh.ufl_cell(), 1)
W = FunctionSpace(mesh, MixedElement([velocity_element, pressure_element]))


# init solution vector, it is a function of a mixed function space, the first one is for velocity field
# and the second for pressure field (u on V and p on Q respectively, with W:=V*Q)
w = Function(W)
u, p = w.split()

import adios4dolfinx
adios4dolfinx.write_function(u, "my_file")
adios4dolfinx.read_function(u, "my_file")