Read OpenFOAM results in dolfinx

Hello folks,

I’m trying to import the results of an OpenFOAM calculation as a dolfinx Function on a MixedSpace. I have a VTK-like file format with point values that I would like to read and use in dolfinx, and I’m having a surprisingly hard time doing it.

I first used this ppt (p. 18) as a source of inspiration. I exported my OpenFOAM results in the legacy .vtk format, then tried to open them in dolfinx. Unfortunately, it seems dolfinx.io.VTKFile does not support reading yet (according to these old docs).

I then moved to meshio for reading the file, but it seems unable to handle POLYDATA VTK files.

So I tried reading the OpenFOAM output into Paraview and then converting it to .xdmf (for some reason my Paraview actually produces a .xmf and a .h5 file, with point values only). dolfinx.io cannot read .xmf files either (as said in this question), but meshio seems to handle them fine.

Then I write the mesh out and read it again as a dolfinx object, and attempt to map the values from meshio directly onto it, with the added complexity of node reordering and vector ordering from this old post. This also seems to work, but fails when I try to write it out for a sanity check, with the unexpected traceback:

Traceback (most recent call last):
  File "/home/shared/src/tset.py", line 40, in <module>
    xdmf.write_function(p)
  File "/usr/local/dolfinx-real/lib/python3.8/dist-packages/dolfinx/io.py", line 46, in write_function
    super().write_function(u_cpp, t, mesh_xpath)
RuntimeError: Newton method failed to converge for non-affine geometry

My MWE follows; it runs inside a dolfinx container after pip3 install --no-binary=h5py h5py meshio:

import meshio, ufl
import numpy as np
import dolfinx as dfx
from dolfinx.io import XDMFFile
from mpi4py.MPI import COMM_WORLD

# Read the mesh, recreate it using meshio magic to make it dolfinx friendly
def create_mesh(mesh, cell_type):
    cells = mesh.get_cells_type(cell_type)
    out_mesh = meshio.Mesh(points=mesh.points, cells={cell_type: cells})
    return out_mesh

in_mesh = meshio.read("front3.xmf")
out_mesh = create_mesh(in_mesh, "quad")
meshio.write("mesh.xdmf", out_mesh)

with XDMFFile(COMM_WORLD, "mesh.xdmf", "r") as file:
    mesh = file.read_mesh(name="Grid")

# Handling reordering
idcs = np.argsort(mesh.geometry.input_global_indices).astype('int32')
vec_idcs = np.vstack((idcs,idcs+np.max(idcs),idcs+2*np.max(idcs)))

# Create useful dolfinx objects
FE_vector=ufl.VectorElement("Lagrange",mesh.ufl_cell(),1,3)
FE_scalar=ufl.FiniteElement("Lagrange",mesh.ufl_cell(),1)

p_Space=dfx.FunctionSpace(mesh,FE_scalar)
p = dfx.Function(p_Space)
u_Space=dfx.FunctionSpace(mesh,FE_vector)
U = dfx.Function(u_Space)

# Map one onto the other 
p.vector[    idcs] = in_mesh.point_data['p']
U.vector[vec_idcs] = in_mesh.point_data['U'].reshape(-1,order='F')

# Print to make human readable
with XDMFFile(COMM_WORLD, "sanity_check.xdmf", "w") as xdmf:
    xdmf.write_mesh(mesh)
    xdmf.write_function(p)
    xdmf.write_function(U)

My front3 files are available here.

Could be related to: pull back on quadrilateral not working if gdim = 3 · Issue #2109 · FEniCS/dolfinx · GitHub
which I will have a look at later today.
Could you print mesh.geometry.dim? I bet it is 3. If you prune the z dimension from your points, i.e.

points=mesh.points[:,:2]

I think it should work.
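For clarity, applied to the create_mesh helper from the MWE above this amounts to (same function, only the points are pruned):

def create_mesh(mesh, cell_type):
    cells = mesh.get_cells_type(cell_type)
    # keep only x and y so the resulting dolfinx mesh has gdim = 2
    out_mesh = meshio.Mesh(points=mesh.points[:, :2], cells={cell_type: cells})
    return out_mesh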

Spot on, that resolves the original bug! I should have known: such a pruning was included in the original code snippet I took inspiration from.

I get an ugly output, though. I feed in a relatively nice flow (front3.xmf in the code above):

And I get this weird, grainy output (sanity_check.xdmf above):

Could you first check if u_Space.tabulate_dof_coordinates() is the same as mesh.geometry.x?
Secondly, could you clarify if it is u or p that you are looking at in your plots?
Could you also check that p is correct, and that p_Space.tabulate_dof_coordinates() is the same as mesh.geometry.x?

If I test with writing p only, the plots look good. Above is u_x, which is not ok (besides, u_x = u_y = u_z in the output, which is not the intended behaviour). This is clearly a dof ordering problem. Last but not least, adding u to the file also breaks p, which becomes (partial) in Paraview and looks broken. I guess I could handle all my velocities as three separate scalar functions from now on, but that sounds wrong…

My code passes both your tests, that is to say:

  • np.sum(np.isclose(u_Space.tabulate_dof_coordinates(),mesh.geometry.x))==mesh.geometry.x.size
  • np.sum(np.isclose(p_Space.tabulate_dof_coordinates(),mesh.geometry.x))==mesh.geometry.x.size

So I got something right. My current code is below:

import meshio, ufl
import numpy as np
import dolfinx as dfx
from dolfinx.io import XDMFFile
from mpi4py.MPI import COMM_WORLD

# Read the mesh, recreate it using meshio magic to make it dolfinx friendly
def create_mesh(mesh, cell_type):
    cells = mesh.get_cells_type(cell_type)
    out_mesh = meshio.Mesh(points=mesh.points[:,:2], cells={cell_type: cells})
    return out_mesh

in_mesh = meshio.read("front3.xmf")
out_mesh = create_mesh(in_mesh, "quad")
meshio.write("mesh.xdmf", out_mesh)

with XDMFFile(COMM_WORLD, "mesh.xdmf", "r") as file:
    mesh = file.read_mesh(name="Grid")

# Handling reordering
idcs = np.argsort(mesh.geometry.input_global_indices).astype('int32')
vec_idcs = np.concatenate((idcs,idcs+np.max(idcs)+1,idcs+2*(np.max(idcs)+1)))

# Create useful dolfinx objects
FE_vector=ufl.VectorElement("Lagrange",mesh.ufl_cell(),1,3)
u_Space=dfx.FunctionSpace(mesh,FE_vector)
U = dfx.Function(u_Space)
FE_scalar=ufl.FiniteElement("Lagrange",mesh.ufl_cell(),1)
p_Space=dfx.FunctionSpace(mesh,FE_scalar)
p = dfx.Function(p_Space)

# Map one onto the other
U.vector[vec_idcs] = in_mesh.point_data['U'].reshape(-1,order='F')
p.vector[    idcs] = in_mesh.point_data['p']

# Print to make human readable
with XDMFFile(COMM_WORLD, "sanity_check.xdmf", "w") as xdmf:
    xdmf.write_mesh(mesh)
    xdmf.write_function(p)
    xdmf.write_function(U)

You can use Extract Block as explained in Using Paraview for visualization — FEniCSx tutorial
to get each of the fields without partial.
The input data in_mesh.point_data["U"]: is it laid out as ux_0, ux_1, ..., ux_N, uy_0, ..., uy_N, uz_0, ..., uz_N?
If so, you need to change the order to ux_0, uy_0, uz_0, ux_1, ..., as this is how DOLFINx stores data.
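A minimal numpy sketch of that reordering, on hypothetical data (U and the other variable names below are illustrative only):

import numpy as np

# hypothetical point data for 4 points: rows = points, columns = (ux, uy, uz)
U = np.arange(12.0).reshape(4, 3)

# DOLFINx wants the interleaved layout ux_0, uy_0, uz_0, ux_1, ...
U_interleaved = U.flatten()                     # C order: point by point

# if the data were instead blocked as ux_0..ux_N, uy_0..uy_N, uz_0..uz_N
U_blocked = U.flatten(order='F')
U_from_blocked = U_blocked.reshape(3, -1).T.flatten()
assert (U_interleaved == U_from_blocked).all()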

The Extract Block option is grayed out for me on this output file, even though I can see it for other files. But that’s low on the priority list, as I can always write two files and keep pressure and velocity separate.

in_mesh.point_data['U'] is a numpy.ndarray of shape (N, 3), so I can get it into the expected format with a simple .flatten(), I think. That’s the easy part.

I now obtain something that differs between the three velocity directions, looking like:

Now my guess is that my vec_idcs is incorrect. I tried:

vec_idcs=np.repeat(idcs,3)
vec_idcs[1::3]+=np.max(idcs)+1
vec_idcs[2::3]+=2*(np.max(idcs)+1)

To no avail. I guess I don’t understand how to map the point indices of my original mesh to positions inside the dolfinx vector.

OK @dokken, I found my own mistake.

The correct way to go about this is:

vec_idcs = np.repeat(3*idcs,3)
vec_idcs[1::3]+=1
vec_idcs[2::3]+=2

Maybe this will be useful to someone else.
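For the record, the reason this works: DOLFINx blocks the vector dofs per node, so the three components of input node idcs[k] sit at positions 3*idcs[k], 3*idcs[k]+1 and 3*idcs[k]+2 of the dof vector. A tiny sanity check with a hypothetical permutation:

import numpy as np

idcs = np.array([2, 0, 1])           # hypothetical node permutation
vec_idcs = np.repeat(3*idcs, 3)
vec_idcs[1::3] += 1
vec_idcs[2::3] += 2
print(vec_idcs)                      # [6 7 8 0 1 2 3 4 5]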

OK, last question. This works for a degree 1 FunctionSpace, but as all CFD users know, keeping order 1 for both pressure and velocity is numerically unstable.

Is there a quick and dirty way to interpolate the 1st degree Function U successfully obtained above onto a 2nd degree Function?

Simply use interpolation, i.e. let’s say you have a function u and a function w in the higher-order space, then run

w.interpolate(u)

as in Error control: Computing convergence rates — FEniCSx tutorial or
dolfinx/test_interpolation.py at ae6f7d7346628494489b60d4a98ca2c520798f72 · FEniCS/dolfinx · GitHub
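As a minimal sketch reusing the names from the MWE above (U is the order-1 vector Function already filled from the OpenFOAM data; U2 and U2_Space are new names introduced here):

FE_vector_2 = ufl.VectorElement("Lagrange", mesh.ufl_cell(), 2, 3)
U2_Space = dfx.FunctionSpace(mesh, FE_vector_2)
U2 = dfx.Function(U2_Space)
U2.interpolate(U)   # interpolate the order-1 data into the order-2 space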

Many thanks! The final process looks like this:

  1. Read the .xmf file using meshio
  2. Write the mesh again thanks to meshio, pruning out the z elements
  3. Read the mesh in dolfinx
  4. Use it to create FiniteElements of order 1 and associated FunctionSpace
  5. Jiggle around the data indexing
  6. Set the Function using vector directly
  7. Interpolate the lower order Function onto the higher order one
  8. Write the Functions into a mixed FunctionSpace and save

Step 8 is required for the rest of my code, but this whole process still feels a tad cumbersome (especially steps 2 and 4). Nonetheless, it works, thanks again!

import meshio, ufl
import numpy as np
import dolfinx as dfx
from dolfinx.io import XDMFFile
from mpi4py.MPI import COMM_WORLD
from petsc4py import PETSc as pet

# Step 1 : read mesh and point data
in_mesh = meshio.read("front3.xmf")
# Step 2 : write it out again
cells = in_mesh.get_cells_type("quad")
out_mesh = meshio.Mesh(points=in_mesh.points[:,:2], cells={"quad": cells})
meshio.write("mesh.xdmf", out_mesh)
# Step 3 : read it again in dolfinx
with XDMFFile(COMM_WORLD, "mesh.xdmf", "r") as file:
    mesh = file.read_mesh(name="Grid")
# Step 4 : create FiniteElement, FunctionSpace & Functions
FE_vector_1=ufl.VectorElement("Lagrange",mesh.ufl_cell(),1,3)
FE_vector_2=ufl.VectorElement("Lagrange",mesh.ufl_cell(),2,3)
FE_scalar  =ufl.FiniteElement("Lagrange",mesh.ufl_cell(),1)
V_Space=dfx.FunctionSpace(mesh,FE_vector_1)
U_Space=dfx.FunctionSpace(mesh,FE_vector_2)
p_Space=dfx.FunctionSpace(mesh,FE_scalar)
Space  =dfx.FunctionSpace(mesh,FE_vector_2*FE_scalar)
V = dfx.Function(V_Space)
U = dfx.Function(U_Space)
p = dfx.Function(p_Space)
# Step 5 : jiggle indexing
idcs = np.argsort(mesh.geometry.input_global_indices).astype('int32')
vec_idcs = np.repeat(3*idcs,3)
vec_idcs[1::3]+=1
vec_idcs[2::3]+=2
# Step 6 : map one onto the other
V.vector[vec_idcs] = in_mesh.point_data['U'].flatten()
p.vector[idcs] = in_mesh.point_data['p']
# Step 7: interpolation
U.interpolate(V)
# Step 8 : write result as mixed
q = dfx.Function(Space)
U_b,p_b = ufl.split(q)
U_b,p_b=U,p
viewer = pet.Viewer().createMPIIO("../cases/nozzle/baseflow/dat_complex/baseflow_S=0.000.dat", 'w', COMM_WORLD)
q.vector.view(viewer)

Quick update: my naive assignment to a mixed element was wrong. Building on this post, I can now present working code:

import meshio, ufl
import numpy as np
import dolfinx as dfx
from dolfinx.io import XDMFFile
from mpi4py.MPI import COMM_WORLD
from petsc4py import PETSc as pet

# Step 1 : read mesh and point data
in_mesh = meshio.read("input.xmf")
# Step 2 : write it out again
cells = in_mesh.get_cells_type("quad")
out_mesh = meshio.Mesh(points=in_mesh.points[:,:2], cells={"quad": cells})
meshio.write("mesh.xdmf", out_mesh)
# Step 3 : read it again in dolfinx
with XDMFFile(COMM_WORLD, "mesh.xdmf", "r") as file:
    mesh = file.read_mesh(name="Grid")
# Step 4 : create FiniteElement, FunctionSpace & Functions
FE_vector_1=ufl.VectorElement("Lagrange",mesh.ufl_cell(),1,3)
FE_vector_2=ufl.VectorElement("Lagrange",mesh.ufl_cell(),2,3)
FE_scalar  =ufl.FiniteElement("Lagrange",mesh.ufl_cell(),1)
V_Space=dfx.FunctionSpace(mesh,FE_vector_1)
U_Space=dfx.FunctionSpace(mesh,FE_vector_2)
p_Space=dfx.FunctionSpace(mesh,FE_scalar)
Space  =dfx.FunctionSpace(mesh,FE_vector_2*FE_scalar)
V = dfx.Function(V_Space)
U = dfx.Function(U_Space)
p = dfx.Function(p_Space)
# Step 5 : jiggle indexing
idcs = np.argsort(mesh.geometry.input_global_indices).astype('int32')
vec_idcs = np.repeat(3*idcs,3)
vec_idcs[1::3]+=1
vec_idcs[2::3]+=2
# Step 6 : map one onto the other
V.vector[vec_idcs] = in_mesh.point_data['U'].flatten()
p.vector[idcs] = in_mesh.point_data['p']
# Step 7: interpolation
U.interpolate(V)
# Step 8 : write result as mixed
q = dfx.Function(Space)
_, map_U = Space.sub(0).collapse(collapsed_dofs=True)
_, map_p = Space.sub(1).collapse(collapsed_dofs=True)
q.vector[map_U]=U.vector
q.vector[map_p]=p.vector
viewer = pet.Viewer().createMPIIO("../cases/nozzle/baseflow/dat_complex/baseflow_S=0.000.dat", 'w', COMM_WORLD)
q.vector.view(viewer)

Also notice that the OpenFOAM v2112 & Paraview 5.10 output includes an entry that causes meshio to fail. If this line is removed from the .xmf file produced by Paraview, however, the above code runs smoothly.
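In case it helps, a hedged pre-processing sketch (OFFENDING_SUBSTRING is a placeholder, not the actual entry, and front3.xmf / front3_clean.xmf are illustrative file names):

# hypothetical clean-up step: drop the line that makes meshio choke
OFFENDING_SUBSTRING = "..."   # placeholder for the entry Paraview writes in your case
with open("front3.xmf") as f:
    kept = [line for line in f if OFFENDING_SUBSTRING not in line]
with open("front3_clean.xmf", "w") as f:
    f.writelines(kept)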

Apologies for reviving this topic, but it would appear that the above code spectacularly fails in parallel. I was especially surprised to notice that the range given by p.vector.getOwnershipRange() does not have the same length as idcs.size, which complicates the mapping between p.vector and in_mesh.point_data['p'].

I’m confused. I expected mesh.geometry.input_global_indices to return the indices of the locally owned dofs, and those to match the indices of an order 1 FunctionSpace. Since the sum of all the ownership ranges still gives the total vector size, I guess there are ghost values in mesh.geometry.input_global_indices pestering me.

Right now, I’m trying to jiggle my way around the indices again.

Try to use p.x.array instead of p.vector, as p.x.array includes all ghost indices as well.

Many thanks! It works with simpler code:

ids = mesh.geometry.input_global_indices
# Map OpenFOAM data directly onto dolfinx vectors
p.x.array[:] = in_mesh.point_data['p'][ids]

Now, about that final part… Assigning the values to the MixedSpace using:

_, map_U = Space.sub(0).collapse(collapsed_dofs=True)
_, map_p = Space.sub(1).collapse(collapsed_dofs=True)
q.vector[map_U]=U.vector
q.vector[map_p]=p.vector

Or using q.split() doesn’t work too well. That part is more of a bonus: I could do without it. But it would be nice if I could get it saved as a MixedSpace.

I’m also surprised that after u, p = q.split(), p.x.array is so large. I would’ve expected something comparable to a Function on the respective subspace…

Without a minimal example to reproduce the error it is quite hard to give you any guidance.

Please note that in the code you are presenting you are still using q.vector, which I would advise you not to do.

Also note that you can get the local range of p by calling:

import dolfinx as dfx
from mpi4py.MPI import COMM_WORLD

mesh = dfx.mesh.create_unit_square(COMM_WORLD, 1, 1)
V = dfx.fem.FunctionSpace(mesh, ("CG", 1))
p = dfx.fem.Function(V)
print(COMM_WORLD.rank, p.x.array[:p.function_space.dofmap.index_map.size_local
      * p.function_space.dofmap.index_map_bs])
print(COMM_WORLD.rank, p.x.array)

which returns:

0 [0. 0.]
0 [0. 0. 0. 0.]
1 [0. 0.]
1 [0. 0. 0. 0.]

Yes, I understand now that it’s a bad idea to use .vector. Allow me to elaborate and share an MWE:

import meshio, ufl
import numpy as np
import dolfinx as dfx
from dolfinx.io import XDMFFile
from mpi4py.MPI import COMM_WORLD as comm
from petsc4py import PETSc as pet

p0=comm.rank==0

if p0:
    # Read mesh and point data
    openfoam_data = meshio.read("front.xmf")
    # Write it out again
    ps=openfoam_data.points[:,:2]
    cs = openfoam_data.get_cells_type("quad")
    dolfinx_fine_mesh = meshio.Mesh(points=ps, cells={"quad": cs})
    meshio.write("nozzle.xdmf", dolfinx_fine_mesh)
else: openfoam_data=None
openfoam_data = comm.bcast(openfoam_data, root=0) # openfoam_data available to all but not distributed
# Read it again in dolfinx - now it's a dolfinx object
with XDMFFile(comm, "nozzle.xdmf", "r") as file:
    dolfinx_fine_mesh = file.read_mesh(name="Grid")

# Create FiniteElement, FunctionSpace & Functions
FE_vector_1=ufl.VectorElement("Lagrange",dolfinx_fine_mesh.ufl_cell(),1,3)
FE_vector_2=ufl.VectorElement("Lagrange",dolfinx_fine_mesh.ufl_cell(),2,3)
FE_scalar=ufl.FiniteElement("Lagrange",dolfinx_fine_mesh.ufl_cell(),1)
V1=dfx.FunctionSpace(dolfinx_fine_mesh,FE_vector_1)
V2=dfx.FunctionSpace(dolfinx_fine_mesh,FE_vector_2)
W=dfx.FunctionSpace(dolfinx_fine_mesh,FE_scalar)
u1, u2 = dfx.Function(V1), dfx.Function(V2)
p = dfx.Function(W)

# Global indexes owned locally
ids = dolfinx_fine_mesh.geometry.input_global_indices
# Map OpenFOAM data directly onto dolfinx vectors
u1.x.array[:] = openfoam_data.point_data['U'][ids,:].flatten()
p.x.array[:]  = openfoam_data.point_data['p'][ids]
# Interpolation to higher order
u2.interpolate(u1)
with XDMFFile(comm, "sanity_check_parallel.xdmf", "w") as xdmf:
    xdmf.write_mesh(dolfinx_fine_mesh)
    xdmf.write_function(u2)
# Write result as mixed
TH = dfx.FunctionSpace(dolfinx_fine_mesh,FE_vector_2*FE_scalar)
_, dofs_U = TH.sub(0).collapse(collapsed_dofs=True)
_, dofs_p = TH.sub(1).collapse(collapsed_dofs=True)
q = dfx.Function(TH)
q.x.array[dofs_U]=u2.x.array
q.x.array[dofs_p]=p.x.array

viewer = pet.Viewer().createMPIIO("baseflow.dat", 'w', comm)
q.vector.view(viewer)
viewer = pet.Viewer().createMPIIO("baseflow.dat", 'r', comm)
q.vector.load(viewer)
u,p=q.split()
with XDMFFile(comm, "sanity_check_parallel2.xdmf", "w") as xdmf:
    xdmf.write_mesh(dolfinx_fine_mesh)
    xdmf.write_function(u)

Used with this file, at the end of the execution of the above script I would like to get two identical files. I also want this to work in parallel. Right now, I think it works as intended up to writing the first file. I’ll try to use your hint to get the MixedElement to behave as I want. For now, I get segfaults, a garbage second file, or an all-zeros second file at best.

It would appear that the above actually runs properly. Thanks again, this is much better in parallel…