So nodes [0 1 2 3] appear to have been reordered to [0 1 3 2]. For whatever it's worth, this does not appear to occur (at least for a single-element mesh) in the case of triangular elements.
Is this expected behavior? Said reordering appears to occur on a fairly large scale for more complex meshes, which makes it much more difficult (unnecessarily so?) to integrate FEniCS alongside/within other tools and workflows.
I encountered this PR while poking around on GitHub. It seems like it might be related, though that particular PR is relatively old and was not merged; I don't see any similar tests in the current version of dolfinx.
It would be helpful to know what kind of software you would like to interface with, and how you were thinking of making this interface (setting aside the reordering of the geometry).
Thanks for educating me on some of the background here; I can see where said remapping may be beneficial from a numerical/computational perspective.
For my specific application I am using external software to wrap DOLFINx, which is used as a structural FEA solver. The external software generates the mesh and outputs to .xdmf, DOLFINx is used to solve the structural mechanics problem, nodal displacements (or other quantities) are output using interpolation as discussed here, and the nodal displacements are communicated back to the external software. The external software then performs further post-processing of the solution; however, this becomes more complicated now that the original node definitions and the displacements of interest do not agree in terms of ordering. Perhaps there is a reasonably simple way to map back to the original ordering? Given that DOLFINx is doing the initial remapping, it would make sense, at least in theory, that it could facilitate the reverse. Otherwise I have to do some kind of post-hoc remapping via something like a point-to-point search, which is clearly less than ideal for a problem of appreciable size.
So, as you are reading the XDMF into dolfinx, why don't you use the XDMFFile for outputting as well?
I.e.
with dolfinx.io.XDMFFile(mesh.comm, "output.xdmf", "w") as xdmf:
    xdmf.write_mesh(mesh)
    xdmf.write_function(u)
This would give you an XDMF file where the mesh geometry and the function data align. (Note that this is only true for CG1 spaces; for CG2 spaces this is effectively an interpolation into CG1.)
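As a minimal sketch of what such an explicit CG1 interpolation before writing might look like, assuming u is a CG2 vector field on mesh (the exact API names, such as VectorFunctionSpace, vary between dolfinx versions):

import dolfinx.fem

# interpolate the (assumed) CG2 displacement u into a matching CG1 space so
# that the written data carries exactly one value per mesh vertex
V1 = dolfinx.fem.VectorFunctionSpace(mesh, ("CG", 1))
u1 = dolfinx.fem.Function(V1)
u1.interpolate(u)
# writing u1 instead of u makes the CG1 projection explicit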
Yes, I could theoretically use this (xdmf) approach or some other approach in which I adopt the dolfinx-defined mesh topology over my original topology. The fundamental issue here is that this is "workflow-breaking" in the sense that quantities, assumptions, etc. that are valid for the upstream topology are no longer valid for the downstream topology, which has implications for the various post-processing operations I perform. This could probably be overcome with a fair amount of effort but would be a difficult pill to swallow.
Even if I wanted to use the xdmf approach, the line xdmf.write_function(u) has been giving me the error below in some cases (running in Docker with a couple of different quad meshes). I guess this is really a different topic (discussed here and elsewhere?) for which I don't have an MWE at present, but it's worth noting.
Traceback (most recent call last):
  File "/root/testModel/fenics/run.py", line 115, in <module>
    file.write_function(u)
  File "/usr/local/dolfinx-real/lib/python3.8/dist-packages/dolfinx/io.py", line 51, in write_function
    super().write_function(getattr(u, "_cpp_object", u), t, mesh_xpath)
RuntimeError: Newton method failed to converge for non-affine geometry
For reference, I have similarly wrapped commercial FEA software, and this consistency between input geometry and output results has not been an issue. I realize that dolfinx is not, nor is it intended to be, commercial software, but this doesn't seem like a terribly unusual use case, though I could be wrong. Perhaps there is no simple way around this for now.
What kind of post-processing steps are we talking about here? Could any of them be performed in DOLFINx?
A variety of structural failure analyses; I can't get into too much detail, unfortunately, due to intellectual property issues. Likely much (all?) of the processing could be performed in DOLFINx, but that would require porting a significant amount of code.
As I stated above, the reordering happens because one wants data locality, which leads to faster assembly times for large problems.
Understood. Again, makes sense.
As dolfinx is open source, nothing is stopping you from removing the remapping and seeing whether the mesh geometry is then left in its original order.
Nothing is stopping me other than my own ability to do so. But sure, fair point.
I'm also interested in what kind of finite elements you are using and how you would do this post-processing when running in parallel.
Overall, I think I have my answer. It sounds like the nodal reordering is more or less intentionally baked into dolfinx, and I'll have to explore some different options for coping. Thanks for providing some ideas on that front and for the discussion.
Note that if you want CG2 output, you should not directly link it to your mesh (as I assume it is a first-order mesh). You would effectively do a CG1 interpolation of your solution field, losing a lot of information.
What ~size does your mesh have (with respect to cells and vertices)? If it is reasonably small, you could read in the original geometry (using h5py) and compute the permutation array of the geometry.
Yes, I realize CG2 → CG1 is a lossy operation; that's satisfactory for me for the current application.
I intend to run a variety of problems, but up to ~250k cells and vertices. I did implement a point-to-point search to construct said permutation array; for smaller problems it performs reasonably and gives me the desired result, but for larger problems it will scale poorly. I presume there are more sophisticated algorithms out there; it just seems like a rather complicated post-hoc solution simply to revert to the topology I originally input.
Have you tried using numba for this? As you are simply accessing two numpy arrays, it should be fairly straightforward to write a sorting function with it.
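For what it's worth, a brute-force numba version of the point-to-point search being discussed might look like the sketch below; the HDF5 file name and dataset path are assumptions (they depend on how the mesh file was written), and mesh is a dolfinx mesh as in the snippets above:

import h5py
import numpy as np
from numba import njit

@njit
def build_permutation(orig_xyz, mesh_xyz, tol=1e-12):
    # for each original node, find the matching dolfinx geometry node by
    # brute-force squared-distance comparison (O(n^2), but numba-compiled)
    perm = np.full(orig_xyz.shape[0], -1, dtype=np.int64)
    for i in range(orig_xyz.shape[0]):
        for j in range(mesh_xyz.shape[0]):
            d = 0.0
            for k in range(orig_xyz.shape[1]):
                d += (orig_xyz[i, k] - mesh_xyz[j, k]) ** 2
            if d < tol:
                perm[i] = j
                break
    return perm

# hypothetical: read the original node coordinates from the HDF5 side of the
# XDMF pair; inspect the file (e.g. with f.visit(print)) to find the real path
with h5py.File("mesh.h5", "r") as f:
    orig_xyz = f["/data0"][...]  # assumed dataset path

perm = build_permutation(orig_xyz, mesh.geometry.x[:, :orig_xyz.shape[1]])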
Thank you @garth, that is absolutely what I was looking for and solved my problem!
Just to fully close the loop for anybody else who might encounter this in the future: mesh.geometry.input_global_indices apparently provides the indexing map to convert from the input nodal array → the nodal array of the mesh within fenics. To get the inverse (fenics array → input array) we can use numpy.argsort(mesh.geometry.input_global_indices). So my MWE would be:
import dolfinx
from dolfinx.io import XDMFFile
from mpi4py import MPI
import numpy as np
# read mesh
fileName = "mesh.xdmf"
with dolfinx.io.XDMFFile(MPI.COMM_WORLD, fileName, "r", encoding=XDMFFile.Encoding.ASCII) as xdmf:
    mesh = xdmf.read_mesh(name="Grid")

# show nodal order, adjusted back to the input ordering
idcs = np.argsort(mesh.geometry.input_global_indices)
print(mesh.geometry.x[idcs, :])
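The same permutation applies to any array whose rows follow the dolfinx geometry ordering, for example vertex-ordered nodal results; node_data below is a hypothetical stand-in:

# permute any per-node array back to the input node numbering
node_data = mesh.geometry.x  # stand-in for, e.g., vertex-ordered displacements
node_data_input_order = node_data[idcs, :]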
@garth
Do we have a similar command for obtaining the original cell_tags (subdomains) after reordering, associated with reading a mesh from .msh as below?

dom, subdomains, boundaries = gmshio.read_from_msh("circle.msh", MPI.COMM_WORLD, 0, gdim=3)
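A possible starting point, not a confirmed answer: dolfinx also stores the input cell indices, as mesh.topology.original_cell_index in recent versions, which is the cell analogue of geometry.input_global_indices, so an argsort may work the same way. A sketch, with the attribute name to be checked against your dolfinx version:

from mpi4py import MPI
from dolfinx.io import gmshio
import numpy as np

dom, subdomains, boundaries = gmshio.read_from_msh("circle.msh", MPI.COMM_WORLD, 0, gdim=3)
# assumption: original_cell_index maps each dolfinx cell to its index in the
# input .msh file, so argsort inverts it (serial case)
inv = np.argsort(dom.topology.original_cell_index)
# if subdomains tags every cell in local cell order, this reverts the tags:
tags_input_order = subdomains.values[inv]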