Can't create 1D mesh in parallel

Hi all,

I’m trying to create a 1D mesh in parallel and I get the following error:

Traceback (most recent call last):
  File "/workspaces/FESTIM/mwe_bug_three_vols.py", line 29, in <module>
    dolfinx.mesh.create_mesh(MPI.COMM_WORLD, cells, mesh_points, domain)
  File "/usr/local/dolfinx-real/lib/python3.12/dist-packages/dolfinx/mesh.py", line 653, in create_mesh
    msh = _cpp.mesh.create_mesh(comm, cells, cmap._cpp_object, x, partitioner)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: A facet is connected to more than two cells.

This is my MWE:

import numpy as np
import dolfinx.mesh
from mpi4py import MPI
import basix.ufl
import ufl

interface_1 = 0.5
interface_2 = 0.7

# Vertex coordinates for three subdomains sharing the interface points
N = 200
vertices = np.concatenate(
    [
        np.linspace(0, interface_1, num=N),
        np.linspace(interface_1, interface_2, num=N),
        np.linspace(interface_2, 1, num=N),
    ]
)
# Drop the duplicated interface vertices
vertices = np.sort(np.unique(vertices)).astype(float)

# 1D linear Lagrange coordinate element
degree = 1
domain = ufl.Mesh(
    basix.ufl.element(basix.ElementFamily.P, "interval", degree, shape=(1,))
)

# Each cell connects two consecutive vertices
mesh_points = np.reshape(vertices, (len(vertices), 1))
indexes = np.arange(vertices.shape[0])
cells = np.stack((indexes[:-1], indexes[1:]), axis=-1)

dolfinx.mesh.create_mesh(MPI.COMM_WORLD, cells, mesh_points, domain)

It works fine in serial, but it fails when running with mpirun -np N for N >= 2.

I’m running this on the nightly build.

You need to send in a unique set of points and cells (not all points and all cells on all processes). When every rank passes the full set of cells, each cell appears once per rank in the global mesh, so interior facets end up connected to more than two cells.

This is for instance explained in
https://jsdokken.com/dolfinx_docs/meshes.html#mpi-communication

Thanks @dokken, this is how to make it work using option 1 from the link you sent:

import numpy as np
import dolfinx.mesh
from mpi4py import MPI
import basix.ufl
import ufl


if MPI.COMM_WORLD.rank == 0:
    # Only rank 0 supplies the points and cells; create_mesh distributes them
    vertices = np.linspace(0, 1, num=200)
    mesh_points = np.reshape(vertices, (len(vertices), 1))
    indexes = np.arange(vertices.shape[0])
    cells = np.stack((indexes[:-1], indexes[1:]), axis=-1)

else:
    # All other ranks supply empty arrays with the right shape and dtype
    mesh_points = np.empty((0, 1), dtype=np.float64)
    cells = np.empty((0, 2), dtype=np.int64)

degree = 1
domain = ufl.Mesh(
    basix.ufl.element(basix.ElementFamily.P, "interval", degree, shape=(1,))
)
mesh = dolfinx.mesh.create_mesh(MPI.COMM_WORLD, cells, mesh_points, domain)
cell_index_map = mesh.topology.index_map(mesh.topology.dim)
print(
    f"Num cells local: {cell_index_map.size_local}\n Num cells global: {cell_index_map.size_global}"
)

Running with mpirun -np 3 produces:

Num cells local: 66
 Num cells global: 199
Num cells local: 67
 Num cells global: 199
Num cells local: 66
 Num cells global: 199
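
For completeness, option 2 from the linked page (every rank supplies a distinct, contiguous chunk of the points and a distinct chunk of the cells, with cells referring to global point indices) might look roughly like the sketch below. The block splitting and variable names here are my own, not from the thread, so treat it as an untested illustration:

import numpy as np
import dolfinx.mesh
from mpi4py import MPI
import basix.ufl
import ufl

comm = MPI.COMM_WORLD
N = 200  # total number of vertices, so N - 1 cells

# Contiguous block of vertices provided by this rank. The global input index of
# a point is its row offset across the rank-ordered blocks.
vertex_ranges = np.linspace(0, N, comm.size + 1).astype(np.int64)
v0, v1 = vertex_ranges[comm.rank], vertex_ranges[comm.rank + 1]
all_vertices = np.linspace(0, 1, num=N)
mesh_points = all_vertices[v0:v1].reshape(-1, 1)

# Contiguous block of cells for this rank; cells reference *global* vertex
# indices, so a cell may use a vertex stored on a neighbouring rank.
cell_ranges = np.linspace(0, N - 1, comm.size + 1).astype(np.int64)
c0, c1 = cell_ranges[comm.rank], cell_ranges[comm.rank + 1]
cells = np.stack(
    (np.arange(c0, c1, dtype=np.int64), np.arange(c0 + 1, c1 + 1, dtype=np.int64)),
    axis=-1,
)

domain = ufl.Mesh(
    basix.ufl.element(basix.ElementFamily.P, "interval", 1, shape=(1,))
)
mesh = dolfinx.mesh.create_mesh(comm, cells, mesh_points, domain)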