Hi DOLFINx community!
I am migrating a solver module for generalized field splitting and preconditioning from FEniCS 2019 to DOLFINx (0.9.0). The workflow runs fine in serial, but has been giving me trouble in parallel when I set up a PCFIELDSPLIT. I think I have narrowed the problem down to the DOF indexing used to build the PETSc IS objects for each block. Below are minimal examples showing how I obtain the DOFs of a mixed space in both legacy FEniCS and DOLFINx.
MWE FEniCS 2019
import fenics as fe
import numpy as np
from mpi4py import MPI
comm = MPI.COMM_WORLD
rank = comm.rank
line_mesh = fe.IntervalMesh(MPI.COMM_WORLD, 5, 0.0, 1.0)
coords_on_line = np.array(line_mesh.coordinates())[:, 0]
print(f'[{rank}] Coordinates: {coords_on_line}')
PE_A = fe.FiniteElement('CG', line_mesh.ufl_cell(), 1)
PE_B = fe.FiniteElement('CG', line_mesh.ufl_cell(), 1)
ME = fe.MixedElement([PE_A, PE_B])
V = fe.FunctionSpace(line_mesh, ME)
V_A = V.sub(0)
V_B = V.sub(1)
# In legacy FEniCS, dofmap().dofs() on a sub-space returns the
# process-owned *global* DOF indices of that sub-space.
dofs_A = V_A.dofmap().dofs()
dofs_B = V_B.dofmap().dofs()
print(f'[{rank}] dofs_A: {dofs_A}')
print(f'[{rank}] dofs_B: {dofs_B}')
In this post, dokken suggests using index_map.local_range to match the behavior of dofmap().dofs() from legacy FEniCS. This gets me closer (the arrays are the same size!), but it still does not give me the global DOF indices of each block. After the DOLFINx MWE below I have also included a sketch of what I have been trying with index_map.local_to_global.
MWE DOLFINx (0.9.0)
from mpi4py import MPI
import numpy as np
import dolfinx
import basix
comm = MPI.COMM_WORLD
rank = comm.rank
line_mesh = dolfinx.mesh.create_interval(MPI.COMM_WORLD, 5, [0.0, 1.0])
coords_on_line = np.sort(np.array(line_mesh.geometry.x)[:, 0])
print(f'[{rank}] Coordinates: {coords_on_line}')
PE_A = basix.ufl.element('CG', line_mesh.basix_cell(), 1)
PE_B = basix.ufl.element('CG', line_mesh.basix_cell(), 1)
ME = basix.ufl.mixed_element([PE_A, PE_B])
V = dolfinx.fem.functionspace(line_mesh, ME)
# collapse() returns the collapsed sub-space and a map from its DOFs to the
# *local* DOF indices (owned + ghosts) of the parent mixed space.
V_A, map_A = V.sub(0).collapse()
V_B, map_B = V.sub(1).collapse()
print(f'[{rank}] dofs_A: {map_A}')
print(f'[{rank}] dofs_B: {map_B}')
# Owned global range of each collapsed sub-space's index map
dofs_A_local_range = np.arange(*V_A.dofmap.index_map.local_range)
dofs_B_local_range = np.arange(*V_B.dofmap.index_map.local_range)
print(f'[{rank}] dofs_A local_range: {dofs_A_local_range}')
print(f'[{rank}] dofs_B local_range: {dofs_B_local_range}')
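Here is the conversion I have been experimenting with, appended to the MWE above (its output is not shown below). This is only a sketch: I am assuming the mixed dofmap has block size 1, that IndexMap.local_to_global is the right call, and that ghost DOFs should be dropped by comparing against size_local, but I am not confident it is correct in parallel.

# Attempt: convert the collapse maps (local DOF indices in the mixed space)
# into global DOF indices via the mixed space's index map.
imap = V.dofmap.index_map  # assuming block size 1 for this mixed element
owned_A = np.array([d for d in map_A if d < imap.size_local], dtype=np.int32)
owned_B = np.array([d for d in map_B if d < imap.size_local], dtype=np.int32)
dofs_A_global = imap.local_to_global(owned_A)
dofs_B_global = imap.local_to_global(owned_B)
print(f'[{rank}] dofs_A global (attempt): {dofs_A_global}')
print(f'[{rank}] dofs_B global (attempt): {dofs_B_global}')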
Output from FEniCS 2019 (2 Processes, bracket indicates rank)
[1] Coordinates: [0. 0.2 0.4 0.6]
[0] Coordinates: [0.6 0.8 1. ]
[0] dofs_A: [0, 2]
[0] dofs_B: [1, 3]
[1] dofs_A: [4, 6, 8, 10]
[1] dofs_B: [5, 7, 9, 11]
Output from DOLFINx (2 Processes, bracket indicates rank)
[0] Coordinates: [0. 0.2 0.4 0.6 0.8]
[0] dofs_A: [0, 1, 4, 6, 8]
[0] dofs_B: [2, 3, 5, 7, 9]
[0] dofs_A local_range: [0 1 2 3]
[0] dofs_B local_range: [0 1 2 3]
[1] Coordinates: [0.4 0.6 0.8 1. ]
[1] dofs_A: [0, 2, 4, 6]
[1] dofs_B: [1, 3, 5, 7]
[1] dofs_A local_range: [4 5]
[1] dofs_B local_range: [4 5]
Thanks in advance for the help! Note: if the solution to "Global DOFs map from index_map.local_range in MPI:" doesn't fix my field-splitting issue, I'll open a new topic with a PETSc example.