C++ version of dolfinx

TL;DR: How can I translate the Python Poisson demo to the C++ version?
I have fully set up a solution to my differential equations in the Python version of dolfinx, and I can successfully solve the problem for a coarse mesh on a local computer where I have root access. I now need to translate the solution to the C++ version. The reason is that I'm running into issues installing the Python wrappers on a compute cluster where I do not have root access. The Docker version installs fine but raises errors when building the function-space objects. I have checked out the C++ demos, but it is still unclear what goes into which file, and I would appreciate tips on how to go about it. I am loading the mesh from an XDMF file. I understand that the form specification needs to go into a separate file so that it can be compiled with ffcx, but it is not immediately clear why I need to build with CMake, or how to refer to the form specification from inside the C++ code. As an example, the poisson_matrix_free demo contains a poisson.py, a CMakeLists.txt, and a main.cpp.

When you run ffcx (or python3 -m ffcx) on the poisson.py file, it generates C code that is compiled when you run CMake.

This means that you can generate these files locally on your computer and copy them over to a cluster/HPC system.
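Assuming your form file is called poisson.py, the generation step is roughly as follows (the copy destination in the last line is just a placeholder):

```shell
# Generate C code from the UFL form file, on a machine where ffcx works
python3 -m ffcx poisson.py
# This writes poisson.c and poisson.h next to poisson.py.
# Copy the generated files together with your C++ sources to the cluster, e.g.:
# scp poisson.c poisson.h main.cpp CMakeLists.txt user@cluster:project/
```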

You can see these C files referenced in the demo's CMakeLists.txt:
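As a rough sketch (assuming the generated file is poisson.c and your driver is main.cpp; compare with the CMakeLists.txt shipped with the demo you are adapting):

```cmake
cmake_minimum_required(VERSION 3.16)
project(poisson LANGUAGES C CXX)

find_package(DOLFINX REQUIRED)

# Compile the ffcx-generated C code together with the C++ driver
add_executable(poisson main.cpp poisson.c)
target_link_libraries(poisson dolfinx)
```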

If you then look at main.cpp, you will observe that it includes poisson.h (which is generated by ffcx).

The main file uses functions from the header file when creating function spaces (dolfinx/main.cpp at main · FEniCS/dolfinx · GitHub), when defining forms (dolfinx/main.cpp at main · FEniCS/dolfinx · GitHub), and for similar operations.
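In outline, the pattern looks like this (a sketch based on the Poisson demo: functionspace_form_poisson_a, form_poisson_a and form_poisson_L are symbols generated by ffcx from poisson.py, and the exact create_form signature has changed between dolfinx versions, so compare with the demo matching your installation):

```cpp
#include "poisson.h" // generated by ffcx from poisson.py
#include <dolfinx.h>
#include <memory>

using namespace dolfinx;

// ... inside main(), after a mesh has been created or read:
// Build the function space from the ffcx-generated descriptor
auto V = std::make_shared<fem::FunctionSpace>(
    fem::create_functionspace(functionspace_form_poisson_a, "u", mesh));

// Build the variational forms from the ffcx-generated form objects
// (coefficient/constant maps left empty here for brevity)
auto a = fem::create_form<PetscScalar>(*form_poisson_a, {V, V}, {}, {}, {});
auto L = fem::create_form<PetscScalar>(*form_poisson_L, {V}, {}, {}, {});
```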

As a side note, have you tried using spack to install dolfinx on your HPC system? It does not require root access.

Similarly, you could use singularity based on the docker images.
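For example, something along these lines (the Spack package name is fenics-dolfinx; the image tag and script name are placeholders, so adjust them to your setup):

```shell
# spack: builds dolfinx and its dependencies in user space, no root needed
spack install fenics-dolfinx

# singularity: pull the official Docker image and run your script inside it
singularity pull docker://dolfinx/dolfinx:stable
singularity exec dolfinx_stable.sif python3 solve_poisson.py
```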


Actually, I used singularity to convert the Docker image to a machine image; I do module load dolfinx and then use singularity to run the relevant Python commands when submitting a batch job. I tried a spack install when I was a newbie on the cluster, and it maxed out the shared temp directory on the login node; it also seemed to reinstall some modules that already existed (because they were a point version or two behind). That might actually be my best option, so I will try again. Thank you for the feedback!

I tried the spack install, which goes well until it needs C++11-capable compilers for some of the modules, and adding new spack compilers takes a considerable amount of time. I have therefore chosen to translate my code to C++, but I am running into issues with loading the mesh from XDMF.

#include <basix/finite-element.h>
#include <boost/program_options.hpp>
#include <cmath>
#include "conduction.h"
#include <dolfinx.h>
#include <dolfinx/io/XDMFFile.h>
#include <xtensor/xarray.hpp>
#include <xtensor/xtensor.hpp>

using namespace dolfinx;
using namespace std;
namespace po = boost::program_options;

int main(int argc, char* argv[])
{
  dolfinx::init_logging(argc, argv);
  MPI_Init(&argc, &argv);

    using T = PetscScalar;

    MPI_Comm comm = MPI_COMM_WORLD;

    // Create mesh and function space
    io::XDMFFile infile(comm, "mesh_tetr.xdmf", "r");
    auto mesh = std::make_shared<mesh::Mesh>();
    mesh = infile.read_mesh(??, mesh::GhostMode::none, "Grid");

I'm unclear on the first argument to read_mesh. My cell type is tetrahedron. Could you point me to any C++ tutorials that load a tetrahedral mesh from an XDMF file? I figured the first argument is a CoordinateElement, but I'm not familiar with basix yet.

If you consider the C++ API, you can see that it needs a coordinate element. Unfortunately, due to a bug, dolfinx::fem::CoordinateElement is not exposed in the API documentation.
However, you can have a look at the source code: dolfinx/CoordinateElement.h at 3f8fa273315b904a9f394ea3d7072e7b940997fd · FEniCS/dolfinx · GitHub
which states that you need to supply the cell type and the degree of the input mesh.
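Concretely, for a linear tetrahedral mesh the loading code could look like this (a sketch matching the CoordinateElement constructor in the source linked above; verify the exact signatures against your installed dolfinx version):

```cpp
#include <dolfinx/fem/CoordinateElement.h>
#include <dolfinx/io/XDMFFile.h>
#include <dolfinx/mesh/Mesh.h>
#include <memory>
#include <mpi.h>

using namespace dolfinx;

// ...
// Degree-1 coordinate element on tetrahedra, matching the input mesh
fem::CoordinateElement cmap(mesh::CellType::tetrahedron, 1);

io::XDMFFile infile(MPI_COMM_WORLD, "mesh_tetr.xdmf", "r");
// read_mesh returns a mesh::Mesh by value, so wrap it in a shared_ptr
auto mesh = std::make_shared<mesh::Mesh>(
    infile.read_mesh(cmap, mesh::GhostMode::none, "Grid"));
```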

EDIT: You can find it through the non-experimental documentation.
