Quad mesh nodes reordered upon reading XDMF file

It appears mesh nodes are reordered when an .xdmf file is read. Given this mesh.xdmf as input:

<Xdmf Version="3.0">
  <Domain>
    <Grid Name="Grid">
      <Geometry GeometryType="XYZ">
        <DataItem DataType="Float" Dimensions="4 3" Format="XML" Precision="8">
          1.0 1.0 0.0
          2.0 -1.0 0.0
          -3.0 -1.0 0.0
          -4.0 1.0 0.0
        </DataItem>
      </Geometry>
      <Topology NumberOfElements="1" TopologyType="Quadrilateral">
        <DataItem DataType="Int" Dimensions="1 4" Format="XML" Precision="8">
          0 1 2 3
        </DataItem>
      </Topology>
    </Grid>
  </Domain>
</Xdmf>

Then running this MWE demonstrates that the nodes are re-ordered:

import dolfinx
from dolfinx.io import XDMFFile
from mpi4py import MPI

# read mesh
fileName = "mesh.xdmf"
with dolfinx.io.XDMFFile(MPI.COMM_WORLD, fileName, "r", encoding=XDMFFile.Encoding.ASCII) as xdmf:
    mesh = xdmf.read_mesh(name="Grid")

# show nodal ordering
print(mesh.geometry.x)

Output:

[[ 1.  1.  0.]
 [ 2. -1.  0.]
 [-4.  1.  0.]
 [-3. -1.  0.]]

So nodes [0 1 2 3] appear to have been reordered to [0 1 3 2]. For whatever it’s worth, this does not appear to occur (at least for a single-element mesh) in the case of triangular elements.
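The claimed permutation can be checked directly in NumPy, using coordinate arrays copied from the file and the output above (array names are for illustration only):

```python
import numpy as np

# node coordinates as listed in mesh.xdmf, in file order
input_nodes = np.array([[ 1.,  1., 0.],
                        [ 2., -1., 0.],
                        [-3., -1., 0.],
                        [-4.,  1., 0.]])

# coordinates as printed from mesh.geometry.x after reading
output_nodes = np.array([[ 1.,  1., 0.],
                         [ 2., -1., 0.],
                         [-4.,  1., 0.],
                         [-3., -1., 0.]])

# row i of the output is row [0, 1, 3, 2][i] of the input
assert np.array_equal(input_nodes[[0, 1, 3, 2]], output_nodes)
```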

Is this expected behavior? This re-ordering appears to occur on a fairly large scale for more complex meshes, which makes it much more difficult (unnecessarily so?) to integrate FEniCS alongside/within other tools/workflows.

I encountered this PR poking around in GitHub. Seems like it might be related, though I see that particular PR is relatively old and was not merged–I don’t see any similar tests in the current version of dolfinx.


Most software reorders the mesh topology (usually to be able to use higher-order Nedelec/DG spaces; see for instance the introduction of: [2102.11901] Construction of arbitrary order finite element degree-of-freedom maps on polygonal and polyhedral cell meshes). However, in DOLFINx we use dof-transformations (see said paper), which in theory means that you do not need to reorder the mesh topology.

The re-ordering you see in the geometry is there to increase data locality; see: https://github.com/FEniCS/dolfinx/blob/23c76248f0686a795f170927036f1331416c1876/cpp/dolfinx/mesh/Geometry.cpp#L73-L105
@garth or @chris can probably provide you with more information regarding this.

It would be helpful to know what kind of software you would like to interface with, and how you were thinking about making this interface (if we do not consider the reordering of the geometry).


Thanks for educating me on some of the background here–I can see where said remapping may be beneficial from a numerical/computational perspective.

For my specific application I am using external software to wrap DOLFINx, which serves as a structural FEA solver. The external software generates the mesh and writes it to .xdmf; DOLFINx solves the structural mechanics problem; nodal displacements (or other quantities) are output using interpolation as discussed here; and the nodal displacements are communicated back to the external software. The external software then performs further post-processing of the solution; however, this becomes more complicated now that the original node definitions and the displacements of interest do not agree in ordering. Perhaps there is a reasonably simple way to map back to the original ordering? Given that DOLFINx performs the initial remapping, it would make sense, at least in theory, that it could facilitate the reverse. Otherwise I have to do some kind of post-hoc remapping via something like a point-to-point search, which is clearly less than ideal for a problem of appreciable size.

Since you are reading XDMF into DOLFINx, why don't you use XDMFFile for output as well?
I.e.

with dolfinx.io.XDMFFile(mesh.comm, "output.xdmf", "w") as xdmf:
    xdmf.write_mesh(mesh)
    xdmf.write_function(u)

This would give you an XDMF file where the mesh geometry and the function data align. (Note that this is only true for CG1 spaces; for CG2 spaces this is effectively an interpolation into CG1.)

I guess there are two answers to this:

  1. Yes, I could theoretically use this (xdmf) approach or some other approach in which I adopt the dolfinx-defined mesh topology over my original topology. The fundamental issue here is that this is “workflow-breaking” in the sense that quantities, assumptions, etc. that are valid for the upstream topology are no longer valid for the downstream topology, which has implications for the various post-processing operations I perform. This could probably be overcome with a fair amount of effort but would be a difficult pill to swallow.

  2. Even if I wanted to use the xdmf approach, the line xdmf.write_function(u) has been giving me the error below in some cases (running in Docker with a couple of different quad meshes). I guess this is really a different topic (discussed here and elsewhere?) for which I don't have an MWE at present, but it's worth noting.

Traceback (most recent call last):
  File "/root/testModel/fenics/run.py", line 115, in <module>
    file.write_function(u)
  File "/usr/local/dolfinx-real/lib/python3.8/dist-packages/dolfinx/io.py", line 51, in write_function
    super().write_function(getattr(u, "_cpp_object", u), t, mesh_xpath)
RuntimeError: Newton method failed to converge for non-affine geometry

For reference, I have similarly wrapped commercial FEA software, and this consistency between input geometry and output results has not been an issue. I realize that dolfinx is not–nor is it intended to be–commercial software, but this doesn't seem like a terribly unusual use case, though I could be wrong. Perhaps there is no simple way around this for now.

what kind of post-processing steps are we talking about here, could any of them be performed in DOLFINx?

As I stated above, the reordering happens due to the fact that one wants to have data-locality, leading to faster assembly times for large problems.

As dolfinx is open source, nothing is stopping you from removing the remapping and seeing whether the mesh geometry is then left in its original order.

I’m also interested in what kind of finite elements you are using and how you would do this post processing when running in parallel.

what kind of post-processing steps are we talking about here, could any of them be performed in DOLFINx?

A variety of structural failure analyses–can’t get into too much detail unfortunately due to intellectual property issues. Likely much (all?) of the processing could be performed in DOLFINx but that would require porting of a significant amount of code.

As I stated above, the reordering happens due to the fact that one wants to have data-locality, leading to faster assembly times for large problems.

Understood. Again, makes sense.

As dolfinx is open source, nothing is stopping you from removing the remapping, and see if the mesh geometry is not reordered then.

Nothing is stopping me other than my own ability to do so :slight_smile: . But sure, fair point.

I’m also interested in what kind of finite elements you are using and how you would do this post processing when running in parallel.

I am working with a shell formulation along the lines of Linear shell model — Numerical tours of continuum mechanics using FEniCS master documentation, so largely CG2 elements for the displacements. I have not given much thought to parallel post-processing thus far–there may be long-term value in that but it is not of immediate concern to me.

Overall I think I have my answer. It sounds like the nodal reordering is more-or-less baked into dolfinx intentionally and I’ll have to explore some different options for coping. Thanks for providing some ideas on that front and for the discussion.

Note that if you want CG2 output, you should not directly link it to your mesh (as I assume it is a first-order mesh). You would effectively do a CG1 interpolation of your solution field, losing a lot of information.

What size (in cells and vertices) does your mesh have? If it is reasonably small, you could read in the original geometry (using h5py) and compute the permutation array of the geometry.

Note that if you want CG2 output, you should not directly link it to your mesh (as I assume it is a first-order mesh). You would effectively do a CG1 interpolation of your solution field, losing a lot of information.

Yes, I realize CG2 → CG1 is a lossy operation–that’s satisfactory for me for the current application.

What size (in cells and vertices) does your mesh have? If it is reasonably small, you could read in the original geometry (using h5py) and compute the permutation array of the geometry.

I intend to run a variety of problems, but up to ~250k cells and vertices. I did implement a point-to-point search to construct said permutation array; for smaller problems it performs reasonably and gives me the desired result, but for larger problems it will scale poorly. I presume there are more sophisticated algorithms out there; it just seems like a rather complicated post-hoc solution simply to recover the ordering I input originally.
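For what it's worth, a brute-force point-to-point search can be replaced by an O(n log n) sort-based match. A minimal NumPy sketch (assuming the coordinates agree exactly between the two arrays; function and array names here are made up for illustration):

```python
import numpy as np

def match_points(original, reordered):
    """Return perm such that reordered[perm] == original, by
    lexicographically sorting both coordinate arrays and composing
    the two sort orders (O(n log n) instead of O(n^2))."""
    # lexsort treats the *last* key as primary, so pass columns reversed
    order_orig = np.lexsort(original.T[::-1])
    order_reord = np.lexsort(reordered.T[::-1])
    perm = np.empty(len(original), dtype=np.intp)
    perm[order_orig] = order_reord
    return perm

# Example with the four nodes from the thread's MWE
original = np.array([[1., 1., 0.], [2., -1., 0.],
                     [-3., -1., 0.], [-4., 1., 0.]])
reordered = np.array([[1., 1., 0.], [2., -1., 0.],
                      [-4., 1., 0.], [-3., -1., 0.]])
perm = match_points(original, reordered)
assert np.array_equal(reordered[perm], original)
```

Exact floating-point equality is assumed; if the coordinates were rewritten with round-off along the way, a tolerance-aware nearest-neighbour search (e.g. a KD-tree) would be needed instead.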

Have you tried using numba for this? As you are simply accessing two numpy arrays, it should be fairly straightforward to write a sorting function with it.

I have not looked into numba before–indeed seems like it could be helpful here. Thanks for the tip!

The function mesh::Geometry::input_global_indices returns the ‘input’ global index. (C++ :Mesh (dolfinx::mesh) — DOLFINx 0.3.1 documentation, Python: dolfinx.cpp.mesh — DOLFINx 0.3.1.0 documentation). With this you can figure out the relationship between the re-ordered and input indices.


Thank you @garth, that is absolutely what I was looking for and solved my problem!

Just to fully close the loop for anybody else who might encounter this in the future, mesh.geometry.input_global_indices apparently provides the indexing map to convert from the input nodal array → the nodal array of the mesh within fenics. To get the inverse (fenics array → input array) we can use numpy.argsort(mesh.geometry.input_global_indices). So my MWE would be:

import dolfinx
from dolfinx.io import XDMFFile
from mpi4py import MPI
import numpy as np

# read mesh
fileName = "mesh.xdmf"
with dolfinx.io.XDMFFile(MPI.COMM_WORLD, fileName, "r", encoding=XDMFFile.Encoding.ASCII) as xdmf:
    mesh = xdmf.read_mesh(name="Grid")

# show nodal order--adjusted back to input
idcs = np.argsort(mesh.geometry.input_global_indices)
print(mesh.geometry.x[idcs,:])

Output:

[[ 1.  1.  0.]
 [ 2. -1.  0.]
 [-3. -1.  0.]
 [-4.  1.  0.]]

We can see here that the order of the nodes is the same as what was provided in my input .xdmf.
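As a sanity check on the argsort trick: for any permutation array p, np.argsort(p) is its inverse permutation, so applying it undoes the reordering. A standalone NumPy illustration (array names and values are hypothetical):

```python
import numpy as np

# input_global_indices tells us, for each dolfinx node, which input
# node it came from; e.g. the [0, 1, 3, 2] permutation from the MWE
input_global_indices = np.array([0, 1, 3, 2])

# argsort of a permutation array yields its inverse permutation
inv = np.argsort(input_global_indices)

# applying inv to dolfinx-ordered data restores the input ordering;
# value i below belongs to input node input_global_indices[i]
dolfinx_data = np.array([10.0, 11.0, 13.0, 12.0])
restored = dolfinx_data[inv]
assert np.array_equal(restored, np.array([10.0, 11.0, 12.0, 13.0]))
```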


@garth
Do we have a similar command for obtaining the original cell_tags (subdomains) after reordering, when reading a mesh from .msh as below?
dom, subdomains, boundaries = gmshio.read_from_msh("circle.msh", MPI.COMM_WORLD,0, gdim=3)

You can use mesh.topology.original_cell_index.
