Render function with pyvista during parallel execution

Simulations can take a long time, and I’d like to see if the results are heading in the right direction without waiting for it to finish. This is why I’d like to render the current state using pyvista every n simulation steps. However, I’m having trouble getting this to work with MPI.

Consider the following minimal example:

from mpi4py import MPI
from dolfinx import fem, mesh, plot

comm = MPI.COMM_WORLD
nx = 20

msh = mesh.create_rectangle(comm, points=((-1.5, -1.0), (1.5, 1.0)), n=(nx, nx), cell_type=mesh.CellType.triangle)

V = fem.FunctionSpace(msh, ("CG", 1))

uh = fem.Function(V)
uh.interpolate(lambda x: x[0] + x[1])

if comm.rank == 0:
    try:
        import pyvista
        cells, types, x = plot.create_vtk_mesh(msh)
        grid = pyvista.UnstructuredGrid(cells, types, x)
        grid.point_data["u"] = uh.x.array.real
        plotter = pyvista.Plotter()
        plotter.add_mesh(grid, show_edges=True)
        plotter.show()
    except ModuleNotFoundError:
        print("'pyvista' is required to visualise the solution")
        print("Install 'pyvista' with pip: 'python3 -m pip install pyvista'")

Running this without MPI or with mpirun -n 1 gives:

However, when I run the code with mpirun -n 4:

Only the subdomain of rank 0 gets rendered, which shouldn't be a surprise. But how can I fix this? What's the easiest way to piece the function back together from all ranks' subdomains?

You would need to gather the mesh data produced by plot.create_vtk_mesh(msh) on the first process, and similarly the uh.x.array has to be gathered on rank 0.

Yes, thanks. But how exactly can I do that?
I’ve gathered simple values before, such as the global maximum of a function:
comm.reduce(uh.vector.max()[1], MPI.MAX)
But I can’t figure out how to gather the function itself.

I've shown how to do that here, for instance (using u.vector rather than u.x.array): Gather solutions in parallel in FEniCSX - #2 by dokken

In your case you should use u.x.array, as you need the ghost dofs to properly render each partition of the mesh.