Want to solve the Poisson equation on a high-order VTK .vtu mesh

Greetings,
I am very new to the FEniCS project and do not even know yet what it can and cannot do.
I have a .vtu file containing a 3-D unstructured mesh (high-order VTK hexahedra and prisms) and several variable fields defined on it. I want to read it into FEniCS and solve a certain Poisson equation, with the source (inhomogeneous) term being a function of some of the available fields. Is it possible to read in the high-order VTK data and solve the equation while keeping the high-order accuracy?
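To be concrete about the goal: the eventual solve would essentially be the standard Poisson demo with the source term replaced by one of the imported fields. A rough sketch of my intent (with a built-in cube and a dummy source standing in for the .vtu mesh and field, since the reading part is exactly what I do not know how to do):

import ufl
from mpi4py import MPI
from dolfinx import default_scalar_type, fem, mesh
from dolfinx.fem.petsc import LinearProblem

domain = mesh.create_unit_cube(MPI.COMM_WORLD, 8, 8, 8)  # placeholder for the .vtu mesh
V = fem.functionspace(domain, ("Lagrange", 1))

f = fem.Function(V)  # placeholder for an imported field (e.g. pressure)
f.interpolate(lambda x: x[0] * x[1])

# Homogeneous Dirichlet condition on the whole boundary, just to make the sketch well-posed
tdim = domain.topology.dim
domain.topology.create_connectivity(tdim - 1, tdim)
boundary_facets = mesh.exterior_facet_indices(domain.topology)
bc = fem.dirichletbc(default_scalar_type(0),
                     fem.locate_dofs_topological(V, tdim - 1, boundary_facets), V)

# Weak form of -div(grad(u)) = f with the field as the source term
u, v = ufl.TrialFunction(V), ufl.TestFunction(V)
a = ufl.inner(ufl.grad(u), ufl.grad(v)) * ufl.dx
L = ufl.inner(f, v) * ufl.dx
uh = LinearProblem(a, L, bcs=[bc], petsc_options={"ksp_type": "preonly", "pc_type": "lu"}).solve()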

Thanks in advance for any response,
Kenan

There is currently no support for mixed higher-order meshes in the FEniCS project.
This is being worked on in:

Single-celltype geometries of higher order can be used in DOLFINx.
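For reference, a minimal sketch of passing a single higher-order (here second-order) tetrahedron to create_mesh, assuming a recent DOLFINx with the basix.ufl interface. The node ordering follows the DOLFINx/basix convention of the 4 vertices first, then the 6 edge nodes (plain straight-edge midpoints in this toy example):

import numpy as np
import ufl
import basix.ufl
from mpi4py import MPI
from dolfinx.mesh import create_mesh

# 10 geometry nodes of one quadratic tetrahedron: 4 vertices, then the nodes on
# edges (2,3), (1,3), (1,2), (0,3), (0,2), (0,1)
points = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0],
                   [0.0, 0.5, 0.5], [0.5, 0.0, 0.5], [0.5, 0.5, 0.0],
                   [0.0, 0.0, 0.5], [0.0, 0.5, 0.0], [0.5, 0.0, 0.0]], dtype=np.float64)
cells = np.array([[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]], dtype=np.int64)

# A degree-2 coordinate element makes the geometry (and hence curved boundaries) quadratic
coord_el = basix.ufl.element("Lagrange", "tetrahedron", 2, shape=(3,))
mesh = create_mesh(MPI.COMM_WORLD, cells, points, ufl.Mesh(coord_el))
print(f"Cells: {mesh.topology.index_map(mesh.topology.dim).size_local}")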

Many thanks for the reply!

To avoid any problems related to this, I subdivided and tetrahedralized my mesh. My version of FEniCSx is v0.4.1.2. Following the guide at https://jsdokken.com/dolfinx_docs/meshes.html, I try to load my mesh and create the dolfinx mesh. The following code fails with a "core dumped" error message.

import pyvista as pv
import numpy as np
from mpi4py import MPI
from dolfinx.mesh import create_mesh
import ufl

# Read the .vtu file using PyVista
mesh_pv = pv.read("minimalExample.vtu")

# Extract points
points = np.array(mesh_pv.points)

# Extract cells (assuming tetrahedrons)
cells = mesh_pv.cells_dict[pv.CellType.TETRA]

# Define the cell type
cell_type = ufl.Mesh(ufl.VectorElement("Lagrange", ufl.tetrahedron, 1))

# Create the DolfinX mesh
mesh = create_mesh(MPI.COMM_WORLD, points,  cells, cell_type)

# Print mesh information
print(f"Number of cells: {mesh.topology.index_map(mesh.topology.dim).size_local}")
print(f"Number of vertices: {mesh.topology.index_map(0).size_local}")

The resulting error message is:

malloc(): mismatching next->prev_size (unsorted)
[canavar:15660] *** Process received signal ***
[canavar:15660] Signal: Aborted (6)
[canavar:15660] Signal code:  (-6)
[canavar:15660] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x43090)[0x7fe794b71090]
[canavar:15660] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0xcb)[0x7fe794b7100b]
[canavar:15660] [ 2] /lib/x86_64-linux-gnu/libc.so.6(abort+0x12b)[0x7fe794b50859]
[canavar:15660] [ 3] /lib/x86_64-linux-gnu/libc.so.6(+0x8d26e)[0x7fe794bbb26e]
[canavar:15660] [ 4] /lib/x86_64-linux-gnu/libc.so.6(+0x952fc)[0x7fe794bc32fc]
[canavar:15660] [ 5] /lib/x86_64-linux-gnu/libc.so.6(+0x9835c)[0x7fe794bc635c]
[canavar:15660] [ 6] /lib/x86_64-linux-gnu/libc.so.6(__libc_malloc+0x1b9)[0x7fe794bc8299]
[canavar:15660] [ 7] /lib/x86_64-linux-gnu/libstdc++.so.6(_Znwm+0x19)[0x7fe77d915b29]
[canavar:15660] [ 8] /lib/x86_64-linux-gnu/libdolfinx_real.so.0.4(+0x1bb3ac)[0x7fe774b663ac]
[canavar:15660] [ 9] /lib/x86_64-linux-gnu/libdolfinx_real.so.0.4(_ZN7dolfinx4mesh11create_meshEP19ompi_communicator_tRKNS_5graph13AdjacencyListIlEERKNS_3fem17CoordinateElementERKN2xt17xtensor_containerINSC_7uvectorIdN5xsimd17aligned_allocatorIdLm16EEEEELm2ELNSC_11layout_typeE1ENSC_22xtensor_expression_tagEEENS0_9GhostModeERKSt8functionIFNS4_IiEES2_iiS7_SO_EE+0x3d8)[0x7fe774b66978]
[canavar:15660] [10] /usr/lib/petsc/lib/python3/dist-packages/dolfinx/cpp.cpython-38-x86_64-linux-gnu.so(+0x95a99)[0x7fe774c97a99]
[canavar:15660] [11] /usr/lib/petsc/lib/python3/dist-packages/dolfinx/cpp.cpython-38-x86_64-linux-gnu.so(+0x224b10)[0x7fe774e26b10]
[canavar:15660] [12] python3(PyCFunction_Call+0x59)[0x5d5499]
[canavar:15660] [13] python3(_PyObject_MakeTpCall+0x296)[0x5d6066]
[canavar:15660] [14] python3(_PyEval_EvalFrameDefault+0x6329)[0x54ce69]
[canavar:15660] [15] python3(_PyEval_EvalCodeWithName+0x26a)[0x54552a]
[canavar:15660] [16] python3(_PyFunction_Vectorcall+0x393)[0x5d5a23]
[canavar:15660] [17] python3(_PyEval_EvalFrameDefault+0x725)[0x547265]
[canavar:15660] [18] python3(_PyEval_EvalCodeWithName+0x26a)[0x54552a]
[canavar:15660] [19] python3(PyEval_EvalCode+0x27)[0x684327]
[canavar:15660] [20] python3[0x673a41]
[canavar:15660] [21] python3[0x673abb]
[canavar:15660] [22] python3[0x673b61]
[canavar:15660] [23] python3(PyRun_SimpleFileExFlags+0x197)[0x6747e7]
[canavar:15660] [24] python3(Py_RunMain+0x212)[0x6b4072]
[canavar:15660] [25] python3(Py_BytesMain+0x2d)[0x6b43fd]
[canavar:15660] [26] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3)[0x7fe794b52083]
[canavar:15660] [27] python3(_start+0x2e)[0x5da67e]
[canavar:15660] *** End of error message ***
Aborted (core dumped)

Am I doing something stupid here? :smiling_face_with_tear:

I would need the mesh file to inspect it for errors. Likely there are vertices in the input geometry that are not present in the cell connectivity.
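If you want to check for that yourself, a quick sanity check on the points/cells arrays from your snippet (assuming the mesh really is tetra-only, so the tetra connectivity covers every cell) could be:

import numpy as np

# Points that are never referenced by any cell are a common cause of this kind of crash
referenced = np.unique(cells)
unreferenced = np.setdiff1d(np.arange(points.shape[0]), referenced)
print(f"{unreferenced.size} of {points.shape[0]} points are not referenced by any cell")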

Here is the link to the file: https://seafile.cloud.uni-hannover.de/f/600d516b764a405881e3/?dl=1

I simply clipped the original mesh in ParaView and then tetrahedralized it to obtain a tetra-only mesh. The "Clean to Grid" filter was also useful for merging the coincident nodes of neighbouring cells.

Thanks again! :smiling_face:

The arguments to the constructor are in the wrong order: cells must be passed in before points.
MWE:

import pyvista as pv
import numpy as np
from mpi4py import MPI
from dolfinx.mesh import create_mesh
import ufl

# Read the .vtu file using PyVista
mesh_pv = pv.read("minimalExample.vtu")

# Extract points
points = np.array(mesh_pv.points)

# Extract cells (assuming tetrahedrons)
cells = mesh_pv.cells_dict[pv.CellType.TETRA]

# Define the cell type
import basix.ufl
el = basix.ufl.element("Lagrange", "tetrahedron", 1, shape=(3,))
cell_type = ufl.Mesh(el)

# Create the DolfinX mesh
mesh = create_mesh(MPI.COMM_WORLD, cells, points, cell_type)

# Print mesh information
print(f"Number of cells: {mesh.topology.index_map(mesh.topology.dim).size_local}")
print(f"Number of vertices: {mesh.topology.index_map(0).size_local}")

Oh, I see. I would not have been able to find that myself, frankly. Thanks!

Now I can even handle the large mesh.

I have another problem: what if I also want to load a field from the .vtu file, to be used in the source terms later?

What I did was:

# Define the FEM function space (P1 Lagrange)
from dolfinx import fem
V = fem.functionspace(domain, ("Lagrange", 1))

# Create a dolfinx Function to hold the field data
u_pressure = fem.Function(V)

# Assuming field_data is ordered according to the mesh vertices
field_data = mesh_pv.point_data["Pressure"]
u_pressure.vector.array[:] = field_data

which resulted in a messy field when I wrote it out to another .vtu file. I then fixed it with:

# Map the field data to the dolfinx vertices
# Create a KDTree for efficient nearest-neighbor search
from scipy.spatial import KDTree

kdtree = KDTree(points)
_, vertex_indices = kdtree.query(domain.geometry.x)

u_pressure.vector.array[:] = field_data[vertex_indices]

This seems to work. Is there a more elegant way to read in a field, without using KD-trees? For large problems this could become prohibitive.

Thanks in advance,
Kenan

I would suggest that you have a look at Reading node based data from "Legacy" XDMFFile · Issue #16 · jorgensd/adios4dolfinx · GitHub, as it describes the logic for XDMF/HDF5 files.
The logic there uses some functions from adios4dolfinx to ensure that the code works in both serial and parallel. Please note that .vtu files are not well suited for parallel reading, as it is hard or impossible to partition the data while reading it.
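In serial there is, I believe, also a lighter-weight alternative to the KD-tree: the mesh geometry keeps track of where each of its nodes sat in the original input arrays, so the point data can be permuted directly. A rough sketch, relying on the same assumption as your KD-tree snippet (namely that the P1 dofs follow the geometry-node ordering):

import numpy as np

# For each local geometry node, input_global_indices gives its index in the
# points/point_data arrays that were passed to create_mesh (serial case)
inp = np.asarray(domain.geometry.input_global_indices)
u_pressure.vector.array[:] = field_data[inp]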

I can also export my mesh as .pvtu, that is, as an already partitioned mesh, from my solver. I reckon, though, that it could be really painstaking to get dolfinx to know the connectivity between the partitions. Another thing I can do is to save the .vtu (or .pvtu) as XDMF in ParaView.
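For that conversion, something like the following meshio script might also work outside ParaView (untested sketch; file and field names are placeholders for my actual data):

import meshio
from mpi4py import MPI
from dolfinx.io import XDMFFile

# Convert the tetra-only .vtu (plus one point field) to XDMF
m = meshio.read("minimalExample.vtu")
out = meshio.Mesh(points=m.points,
                  cells=[("tetra", m.cells_dict["tetra"])],
                  point_data={"Pressure": m.point_data["Pressure"]})
meshio.write("minimalExample.xdmf", out)

# Read the mesh back in dolfinx (the point field itself still has to be mapped onto a Function)
with XDMFFile(MPI.COMM_WORLD, "minimalExample.xdmf", "r") as xdmf:
    domain = xdmf.read_mesh(name="Grid")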

There is also another issue holding me back: defining periodic boundary conditions in dolfinx. Given that, I think I will give up, because this has turned out to be more complicated than I thought at first…

Periodic conditions are supported by dolfinx_mpc. See for instance
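A rough sketch of the geometrical variant, modelled on the dolfinx_mpc demos (exact signatures may differ between versions; V, a, L and bcs are assumed to come from the problem set-up):

import numpy as np
import dolfinx_mpc

# Map the plane x = 1 onto the plane x = 0
def periodic_boundary(x):
    return np.isclose(x[0], 1.0)

def periodic_relation(x):
    out = x.copy()
    out[0] = 1.0 - x[0]
    return out

mpc = dolfinx_mpc.MultiPointConstraint(V)
mpc.create_periodic_constraint_geometrical(V, periodic_boundary, periodic_relation, bcs)
mpc.finalize()

# Assemble and solve with the constraint
problem = dolfinx_mpc.LinearProblem(a, L, mpc, bcs=bcs)
uh = problem.solve()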
