Global DOFs map from index_map.local_range in MPI

Hi Dolfinx community!

I am migrating a solver module for generalized field-splitting and preconditioning from FEniCS 2019 to DOLFINx (0.9.0). The workflow works well in serial, but it has been giving me trouble in parallel when I set up a PCFIELDSPLIT. I think I have narrowed it down to the DOF indexing used to build the PETSc IS objects for each block. Below are examples of how the DOFs are obtained from mixed spaces in both legacy FEniCS and DOLFINx.
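For context, the global block indices would feed a PCFIELDSPLIT setup roughly like the sketch below (a hypothetical, minimal example via petsc4py; the helper name and the ksp/global_dofs_* arguments are placeholders, not my actual module):

import numpy as np
from petsc4py import PETSc


def setup_fieldsplit(ksp, global_dofs_A, global_dofs_B, comm):
    # One index set per block, built from that block's global DOF indices
    is_A = PETSc.IS().createGeneral(np.asarray(global_dofs_A, dtype=PETSc.IntType), comm=comm)
    is_B = PETSc.IS().createGeneral(np.asarray(global_dofs_B, dtype=PETSc.IntType), comm=comm)
    pc = ksp.getPC()
    pc.setType("fieldsplit")
    pc.setFieldSplitIS(("A", is_A), ("B", is_B))
    return pc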

MWE FEniCS 2019

import fenics as fe 
import numpy as np
from mpi4py import MPI


comm = MPI.COMM_WORLD
rank = comm.rank
line_mesh = fe.IntervalMesh(MPI.COMM_WORLD, 5, 0.0, 1.0)

coords_on_line = np.array(line_mesh.coordinates())[:, 0]
print(f'[{rank}] Coordinates: {coords_on_line}')

PE_A = fe.FiniteElement('CG', line_mesh.ufl_cell(), 1)
PE_B = fe.FiniteElement('CG', line_mesh.ufl_cell(), 1)
ME = fe.MixedElement([PE_A, PE_B])
V = fe.FunctionSpace(line_mesh, ME)

V_A = V.sub(0)
V_B = V.sub(1)

# Global indices of the DOFs owned by this process in each sub-space
dofs_A = V_A.dofmap().dofs()
dofs_B = V_B.dofmap().dofs()

print(f'[{rank}] dofs_A: {dofs_A}')
print(f'[{rank}] dofs_B: {dofs_B}')  

In this post, dokken suggests using index_map.local_range to match the behavior of dofmap().dofs() from legacy FEniCS. This gets me closer (the arrays are the same size!), but it still doesn’t give me the global DOF indices.

MWE DOLFINx (0.9.0)

from mpi4py import MPI
import numpy as np
import dolfinx 
import basix

comm = MPI.COMM_WORLD
rank = comm.rank
line_mesh = dolfinx.mesh.create_interval(MPI.COMM_WORLD, 5, [0.0, 1.0])

coords_on_line = np.sort(np.array(line_mesh.geometry.x)[:, 0])
print(f'[{rank}] Coordinates: {coords_on_line}')

PE_A = basix.ufl.element('CG', line_mesh.basix_cell(), 1)
PE_B = basix.ufl.element('CG', line_mesh.basix_cell(), 1)
ME = basix.ufl.mixed_element([PE_A, PE_B])
V = dolfinx.fem.functionspace(line_mesh, ME)
# Collapse each sub-space; map_A/map_B are the parent-space local indices of each block's DOFs
V_A, map_A = V.sub(0).collapse()
V_B, map_B = V.sub(1).collapse()

print(f'[{rank}] dofs_A: {map_A}')
print(f'[{rank}] dofs_B: {map_B}')

# local_range is the [start, end) range of globally numbered DOFs owned by this
# process in the *collapsed* sub-space, not global indices into the mixed space
dofs_A_local_range = np.arange(*V_A.dofmap.index_map.local_range)
dofs_B_local_range = np.arange(*V_B.dofmap.index_map.local_range)

print(f'[{rank}] dofs_A local_range: {dofs_A_local_range}')
print(f'[{rank}] dofs_B local_range: {dofs_B_local_range}')

Output from FEniCS 2019 (2 Processes, bracket indicates rank)


[1] Coordinates: [0.  0.2 0.4 0.6]
[0] Coordinates: [0.6 0.8 1. ]
[0] dofs_A: [0, 2]
[0] dofs_B: [1, 3]
[1] dofs_A: [4, 6, 8, 10]
[1] dofs_B: [5, 7, 9, 11]

Output from DOLFINx (2 Processes, bracket indicates rank)

[0] Coordinates: [0.  0.2 0.4 0.6 0.8]
[0] dofs_A: [0, 1, 4, 6, 8]
[0] dofs_B: [2, 3, 5, 7, 9]
[0] dofs_A local_range: [0 1 2 3]
[0] dofs_B local_range: [0 1 2 3]
[1] Coordinates: [0.4 0.6 0.8 1. ]
[1] dofs_A: [0, 2, 4, 6]
[1] dofs_B: [1, 3, 5, 7]
[1] dofs_A local_range: [4 5]
[1] dofs_B local_range: [4 5]

Thanks in advance for the help! Note: if the solution to “Global DOFs map from index_map.local_range in MPI” doesn’t fix my field-splitting issue, I’ll create a new topic with a PETSc example.

Is this what you’re looking for?

If you just want the global indices of all DOFs (including ghost DOFs) in the sub-space, the easiest approach is the following:

from mpi4py import MPI
import numpy as np
import dolfinx
import basix

comm = MPI.COMM_WORLD
rank = comm.rank
line_mesh = dolfinx.mesh.create_interval(MPI.COMM_WORLD, 5, [0.0, 1.0])

coords_on_line = np.sort(np.array(line_mesh.geometry.x)[:, 0])
print(f"[{rank}] Coordinates: {coords_on_line}")

PE_A = basix.ufl.element("CG", line_mesh.basix_cell(), 1)
PE_B = basix.ufl.element("CG", line_mesh.basix_cell(), 1)
ME = basix.ufl.mixed_element([PE_A, PE_B])
V = dolfinx.fem.functionspace(line_mesh, ME)
V_A, map_A = V.sub(0).collapse()
V_B, map_B = V.sub(1).collapse()

map_A = np.asarray(map_A)
map_B = np.asarray(map_B)
print(f"[{rank}] dofs_A: {map_A}")
print(f"[{rank}] dofs_B: {map_B}")

# With block size 1, the collapsed maps can be passed directly to the parent
# index map to obtain global DOF numbers (including ghosts)
assert V.dofmap.index_map_bs == 1
V_A_global = V.dofmap.index_map.local_to_global(map_A)
V_B_global = V.dofmap.index_map.local_to_global(map_B)

print(f"[{rank}] {V_A_global=}, {V_B_global=}")

which returns

[1] Coordinates: [0.  0.2 0.4 0.6 0.8]
[1] dofs_A: [0 1 4 6 8]
[1] dofs_B: [2 3 5 7 9]
[1] V_A_global=array([ 6,  7, 10,  0,  1], dtype=int64), V_B_global=array([ 8,  9, 11,  2,  3], dtype=int64)
[0] Coordinates: [0.4 0.6 0.8 1. ]
[0] dofs_A: [0 1 4 6]
[0] dofs_B: [2 3 5 7]
[0] V_A_global=array([ 0,  1,  4, 10], dtype=int64), V_B_global=array([ 2,  3,  5, 11], dtype=int64)

If you only want owned entities, you would strip the map you send to local_to_global, i.e.

local_A = V_A.dofmap.index_map.size_local * V_A.dofmap.index_map_bs
map_A = np.asarray(map_A)[:local_A]

local_B = V_B.dofmap.index_map.size_local * V_B.dofmap.index_map_bs
map_B = np.asarray(map_B)[:local_B]

and then call local_to_global on map_A/map_B.
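Put together, a minimal sketch of the owned-only variant (reusing V, V_A, V_B, map_A, map_B and rank from the example above, with map_A/map_B already converted to NumPy arrays):

# Keep only the DOFs owned by this process in each collapsed sub-space
local_A = V_A.dofmap.index_map.size_local * V_A.dofmap.index_map_bs
owned_A = map_A[:local_A]

local_B = V_B.dofmap.index_map.size_local * V_B.dofmap.index_map_bs
owned_B = map_B[:local_B]

# Convert the owned parent-space local indices to the global numbering
A_global_owned = V.dofmap.index_map.local_to_global(owned_A)
B_global_owned = V.dofmap.index_map.local_to_global(owned_B)
print(f"[{rank}] {A_global_owned=}, {B_global_owned=}")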

This is it! For those looking, the solution is in post 4 of Stein’s thread. Dokken’s solution above also works. Thanks all!