Issues with saving mesh connectivity in parallel

I would like to save the mesh connectivity as a .txt file. I tried to gather the information on the main processor and then save it from there, but it turns out I have made some mistake: some rows are repeated in my output file. What mistakes did I make?

from __future__ import print_function
from dolfin import *
from ufl import nabla_div
import numpy as np
from mpi4py import MPI

mesh = BoxMesh(Point(-1, -1, -1), Point(1, 1, 1), 10, 10, 10)
V = VectorFunctionSpace(mesh, 'P', 1)

connectivity = mesh.cells()

comm = MPI.COMM_WORLD

# Gather the local connectivity and the global cell indices on rank 0
gathered_connectivity = comm.gather(connectivity, root=0)
global_eldex = comm.gather(mesh.topology().global_indices(3), root=0)

if comm.rank==0:
    num_elements = mesh.num_entities_global(3)
    all_connectivity = np.zeros((num_elements, connectivity.shape[1]))
    for conn, eldex in zip(gathered_connectivity, global_eldex):
        all_connectivity[eldex] = conn

    np.savetxt('mesh_connectivity_test1.txt', all_connectivity, fmt='%8i')

And some rows from my output txt file are:
0 1 2 3
0 1 4 3
0 5 4 3
0 6 2 3
0 5 7 3
0 6 7 3
1 8 9 10
1 8 11 10
1 4 11 10
1 2 9 10
1 4 3 10
1 2 3 10
8 12 13 14
8 12 15 14
8 11 15 14
8 9 13 14
8 11 10 14
8 9 10 14
12 16 17 18
12 16 19 18
12 15 19 18
12 13 17 18
12 15 14 18
12 13 14 18
16 20 21 22
16 20 23 22
16 19 23 22
16 17 21 22
16 19 18 22
16 17 18 22
0 1 2 3 (repeats the first row)
0 1 4 3
0 5 4 3

I ran your MWE and checked for duplicate rows; there were none. I am using the 2019.2.0.dev0 version of dolfin.

Thank you for the response. Did you run the code in parallel?

Sorry for the oversight; I did run it in parallel, and it also results in multiple duplicate rows. I’m not able to figure out the reason right now, but this duplication error is similar to what is asked here as well: Code not running in parallel. It might be related to how you installed FEniCS.

Thanks for the reference. After reading the post you referred to, I think my case is different. I did check the rank number, and it printed the correct rank. I found that the issue may be related to the following command:

gathered_connectivity = comm.gather(connectivity, root=0)

It doesn’t give the correct gathered connectivity.

You’re likely outputting the local connectivity on each process: the vertex numbers in each row are local to that process, which is why the same-looking rows (e.g. 0 1 2 3) appear once per process. If you want the global indices, you can map them from the local indices.

Can you please recommend more specific functions to get the global indices?

Probably something along these lines

import dolfinx
from mpi4py import MPI
import numpy as np

# Distributed 2x2 unit square mesh (triangles)
mesh = dolfinx.mesh.create_unit_square(MPI.COMM_WORLD, 2, 2)

# Number of cells owned by this process (excludes ghost cells)
num_cells_local = mesh.topology.index_map(mesh.topology.dim).size_local

# Cell-to-vertex connectivity of the owned cells, in local vertex indices
cells_local = mesh.topology.connectivity(
    mesh.topology.dim, 0).array.reshape(-1, 3)[:num_cells_local]

# Map the local vertex indices to global vertex indices
cells_global = mesh.topology.index_map(0).local_to_global(
    cells_local.ravel()).reshape(-1, 3)

# Gather the per-process global connectivity on rank 0
cells_global = mesh.comm.gather(cells_global, root=0)
if mesh.comm.rank == 0:
    cells_global = np.concatenate(cells_global)
    print(f"num cells global: {cells_global.shape[0]}")
    print(f"cell index map size_global: {mesh.topology.index_map(mesh.topology.dim).size_global}")
    print(f"cells:\n{cells_global}")

It’s not entirely clear what you want this for. See, for example, the post on gathering a mesh on a single process.