Re-developing FEniCS/FEniCSx's MPI/DDM methods

Hello, I wonder how I can access the domain decomposition scheme implemented in FEniCSx, or the communication between partitioned mesh domains.
Say I’m trying to apply different transmission conditions to the partitioned subdomains to accelerate convergence, or to use DDM as a preconditioner to make indefinite problems more stable. I think mpirun partitions the mesh across cores and keeps the parts communicating via ghost entities (please correct me if I’m wrong)? But the underlying solution techniques for DDM and the ghost communication in FEniCS or FEniCSx are not mentioned much in any introduction I have found.
Could someone suggest a webpage on re-developing FEniCSx’s domain decomposition or other parallel techniques? I’d appreciate it a lot.

FEniCSx’s components are open source; particularly relevant to your question is dolfinx. You’re welcome to fork and modify the code to develop your research project as you wish. Pertinent to partitioning meshes across MPI processes is dolfinx::graph, see e.g. here. The code is (hopefully) well documented, such that someone with knowledge of MPI and FEM will be well prepared.


Thanks @nate, your reply is really helpful. May I ask a few more questions:

  1. How are the extra DOFs for parallel code (BCs on interior boundaries between submeshes, whether overlapping or non-overlapping DDM is used) treated in FEniCSx? In which part of the dolfinx project does this code live?
  2. Do you know of any open project implementing a DDM preconditioner in FEniCS? I really wonder how FEniCS assembles it into a block-diagonal matrix.
  3. What is the maximum problem size (DOFs or number of mesh cells) supported by FEniCS? Does this suit a supercomputer framework?

Since I’m a rookie at MPI and at modifying FEniCS, any related answer will be of great help. Thanks!

Your questions 1 & 2 form a research project and should likely be answered by studying the esoteric method you’re interested in, along with FEniCSx’s source code.

The answer to question 3 is: as many as your computer can handle. A good number of people have used FEniCS for modelling physical systems on HPCs; naturally I’ll cite my own and my co-workers’ work here.


To add to @nate’s reply, you can see scaling results for DOLFINx at: https://fenics.github.io/performance-test-results/
based on
GitHub - FEniCS/performance-test: Mini App for FEniCS-X performance testing

In DOLFINx, we have the concept of ghost entities, such as ghost vertices, edges, facets and cells.
Some work towards domain decomposition with DOLFINx has been started by @IgorBaratta in:


Appreciate you all! Thanks.