Distribute sequential mesh and function

Hello,

Let’s say I have a mesh and a function on a single processor, e.g.

import numpy as np
from mpi4py import MPI
import dolfinx.mesh, dolfinx.fem as fem

n = 100
mesh = dolfinx.mesh.create_unit_square(MPI.COMM_SELF, n, n)
V = fem.FunctionSpace(mesh, ("CG", 1))
f = fem.Function(V)
f.interpolate(lambda x: np.sin(x[0]))

How can I distribute mesh and f on multiple processes for further parallel processing?

You need to be more specific about what you want to do in parallel for any response to make sense.

DOLFINx uses MPI for parallelization, which starts at the mesh level: the mesh is partitioned over multiple processes. Functions are in turn distributed according to that partitioning.

Thus, if you want a distributed mesh, you should run your script with mpirun -n N application, where N is the number of processes to distribute your problem over.
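For instance, taking the snippet from your question and swapping COMM_SELF for COMM_WORLD already gives you a distributed mesh and function when launched under mpirun. A minimal sketch (the printed counts will differ per rank and depend on the partitioner):

# run with: mpirun -n 4 python3 partition_demo.py
import numpy as np
from mpi4py import MPI
import dolfinx.mesh, dolfinx.fem as fem

n = 100
# COMM_WORLD: the mesh is partitioned over all processes
mesh = dolfinx.mesh.create_unit_square(MPI.COMM_WORLD, n, n)
V = fem.FunctionSpace(mesh, ("CG", 1))
f = fem.Function(V)
f.interpolate(lambda x: np.sin(x[0]))

# each rank now owns only a part of the cells and degrees of freedom
tdim = mesh.topology.dim
print(f"rank {mesh.comm.rank}:",
      mesh.topology.index_map(tdim).size_local, "owned cells,",
      V.dofmap.index_map.size_local, "owned dofs")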

Let me add some context to clarify my question. The reason I am starting with single-processor code is that I am loading data from a file format that does not support parallel I/O. In my previous example, f can be seen as a PDE coefficient whose nodal values are stored in a CSV file. In a single-process context, I can simply create a function f and load the values into f.x.array (modulo some dolfinx reordering). The mesh, on the other hand, is stored as an XDMF file and can be read in parallel.

Now, if I want to solve my problem using MPI, I need to load the values of f into the distributed vector, but I don't know how to do so.

On a single process the code would look like this (pseudocode):

# load mesh
mesh_xdmf = dolfinx.io.XDMFFile(MPI.COMM_SELF, "mesh.xdmf", "r")
mesh = mesh_xdmf.read_mesh(name="Grid")

# load f values
f_dofs = read_csv("f.csv")
f_dofs = reorder_dolfinx(f_dofs)

V = fem.FunctionSpace(mesh, ("CG", 1))
f = fem.Function(V)
f.vector.setArray(f_dofs)

# setup problem and solve ...

A parallel version would look like this (pseudocode):

# run with mpirun

# load mesh in parallel
mesh_xdmf = dolfinx.io.XDMFFile(MPI.COMM_WORLD, "mesh.xdmf", "r")
mesh = mesh_xdmf.read_mesh(name="Grid")

V = fem.FunctionSpace(mesh, ("CG", 1))
f = fem.Function(V) # f.vector is a distributed vector

rank = mesh.comm.rank
if rank == 0:
    # read csv on a single process
    f_dofs = read_csv("f.csv")

    # perform some MPI communications to send each rank its share
    my_dofs = distribute_dofs(f, f_dofs)
else:
    # receive this rank's share of the dof values from rank 0
    my_dofs = receive_dofs(f)

with f.vector.localForm() as f_loc:
    f_loc.setArray(my_dofs)

# setup problem and solve ...

As you can see, once I load the CSV file on rank 0, I need to distribute it according to the partitioning scheme that dolfinx uses for the distributed mesh/function. However, I'm not sure whether there is a nice and easy way to do this, i.e. some helper functions to perform the MPI communication.

See for instance: Gather solutions in parallel in FEniCSX - #2 by dokken
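The key ingredient of that approach, applied to cell data, is that each rank can recover which cells of the original (serial) input mesh it holds via mesh.topology.original_cell_index, so rank 0 can slice the serially read data per rank before communicating it. A rough sketch of inspecting that map, assuming the mesh file is mesh.xdmf:

from mpi4py import MPI
import dolfinx.io

mesh_xdmf = dolfinx.io.XDMFFile(MPI.COMM_WORLD, "mesh.xdmf", "r")
mesh = mesh_xdmf.read_mesh(name="Grid")
mesh_xdmf.close()

tdim = mesh.topology.dim
num_owned = mesh.topology.index_map(tdim).size_local
# for each local cell (owned cells first), its index in the input file's cell ordering
original_index = mesh.topology.original_cell_index
print(f"rank {mesh.comm.rank}: owns {num_owned} cells, "
      f"input indices of first owned cells: {original_index[:5]}")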

Thank you! This is what I was looking for.

Here is my solution, adapted from @dokken's comment, in case it can help somebody in the future. It reads a function from an XDMF file into an MPI-distributed dolfinx function. It only works for DG0 functions with block_size=1, but the code should be straightforward to modify to handle more cases.

import numpy as np
from mpi4py import MPI
import meshio
import dolfinx.fem as fem, dolfinx.mesh, dolfinx.io

def read_dg0_function_parallel(xdmf_filename: str, function_name: str, f: fem.Function):
    """Read a scalar valued DG0 function from a file and
    distribute it to a MPI Function.

    Args:
        xdmf_filename (str): path to the file where the function is stored
        function_name (str): name of the function to read
        f (fem.Function): dolfinx function to store the read values
    
    Returns:
        None
    """
    mesh = f.function_space.mesh
    comm = mesh.comm

    rank = comm.rank
    local_cell_indices = np.asarray(mesh.topology.original_cell_index, dtype=np.int32)
    ranks_cell_indices = comm.gather(local_cell_indices, root=0)
        
    send_buf = []
    if rank == 0:
        # read the mesh and its cell data with meshio (serial read on rank 0)
        mesh_meshio = meshio.read(xdmf_filename)
        meshio_fn = mesh_meshio.cell_data[function_name][0]

        # slice the serially read data according to each rank's original cell indices
        for rank_cell_indices in ranks_cell_indices:
            send_buf.append(meshio_fn[rank_cell_indices])

    # each rank receives the values corresponding to its own cells
    local_dof_values = comm.scatter(send_buf, root=0)
    with f.vector.localForm() as f_loc:
        f_loc.setArray(local_dof_values)

Usage:

def main():
    comm = MPI.COMM_WORLD

    # read the mesh in parallel
    xdmf_mesh_file = "mesh.xdmf"
    mesh_xdmf = dolfinx.io.XDMFFile(comm, xdmf_mesh_file, "r")
    mesh = mesh_xdmf.read_mesh(name="Grid")
    mesh_xdmf.close()

    V = fem.FunctionSpace(mesh, ("DG", 0))
    f = fem.Function(V, name="f")

    read_dg0_function_parallel(xdmf_mesh_file, "f", f)


if __name__ == "__main__":
    main()
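To handle a vector-valued DG0 function (block_size > 1), my guess, untested, is that it suffices to flatten each rank's slice before scattering, since the blocked dof layout keeps the components of a cell contiguous. Something like this inside the rank-0 loop:

# hypothetical modification for block_size > 1 (vector-valued DG0);
# assumes meshio returns the cell data with shape (num_cells, block_size)
for rank_cell_indices in ranks_cell_indices:
    # row-major flattening keeps each cell's components contiguous,
    # matching the blocked layout of the local dof array
    send_buf.append(meshio_fn[rank_cell_indices].reshape(-1))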