DofMap and ghost values

Hello dolfinx users!

I have two dolfinx.Function objects a and b. I would like to use them as columns of two PETSc matrices A and B, with a cutoff, in parallel. This requires knowing which processor owns which degree of freedom.

I have on hand the FunctionSpace object V0 that was used for both Function objects; it is a collapsed subspace of a larger space V. I want A to operate on V0 and B on V.

Well, one seems significantly easier than the other:

from petsc4py import PETSc as pet
import numpy as np

# comm, V, a, b and the global/local sizes n, n_local, m, m_local are defined elsewhere
V0, map0 = V.sub(0).collapse()
map0 = np.array(map0, dtype='int32')
l = V0.dofmap.index_map.size_local
I, J = V0.dofmap.index_map.local_range
map = np.arange(I, J, dtype='int32')

A = pet.Mat().create(comm=comm); A.setSizes([[n_local, n], [m_local, m]])
B = pet.Mat().create(comm=comm); B.setSizes([[n_local, n], [n_local, n]])
for M in (A, B):
    M.setUp()
    M.setOption(M.Option.IGNORE_ZERO_ENTRIES, 1)
aa = a.x.array
msk = np.abs(aa) > 1e-4
A.setValues([0], map0[msk], aa[msk])
ba = b.x.array[:l]
msk = np.abs(ba) > 1e-4
B.setValues([0], map[msk], ba[msk])

I find it surprisingly hard to get a map containing both local and ghost indices to link to Function.x.array, whereas collapse hands one over effortlessly. I’m not even sure I have it right…

Dear @hawkspar,

Your code is not complete, i.e. several important definitions are missing for anyone to be able to reproduce your issue. In particular, the choice of function spaces is crucial.

Please revise your example.

Here is an example that creates the local_to_global map containing both owned and unowned dofs (unrolled for block size).

import dolfinx
from mpi4py import MPI
import ufl
import numpy as np
domain = dolfinx.mesh.create_unit_square(MPI.COMM_WORLD, 2, 2)
el0 = ufl.FiniteElement("Lagrange", domain.ufl_cell(), 1)
el2 = ufl.VectorElement("Lagrange", domain.ufl_cell(), 2)
el = ufl.MixedElement([el0, el2])
V = dolfinx.fem.FunctionSpace(domain, el)

u = dolfinx.fem.Function(V)

# Create local map
local_range = V.dofmap.index_map.local_range
num_dofs_local = V.dofmap.index_map.size_local
local_dof_map = np.arange(num_dofs_local, dtype=np.int64)
local_dof_map += local_range[0]


# Get ghost indices
num_ghosts = V.dofmap.index_map.num_ghosts
ghosts = V.dofmap.index_map.ghosts
ghost_ownership = V.dofmap.index_map.owners

# Unroll for block size
bs = V.dofmap.index_map_bs
local_to_global = np.zeros((num_dofs_local+num_ghosts)*bs, dtype=np.int64)
owners = np.zeros_like(local_to_global)
for i in range(bs):
    for j in range(num_dofs_local):
        local_to_global[j*bs+i] = local_dof_map[j]*bs+i
        owners[j*bs+i] = domain.comm.rank

    for j in range(num_ghosts):
        local_to_global[num_dofs_local*bs+ j*bs + i] = ghosts[j]*bs+i
        owners[num_dofs_local*bs + j*bs+i] = ghost_ownership[j]

# Stagger the output so that the ranks print in order
import time
time.sleep(domain.comm.rank)
print(f"Rank {domain.comm.rank}, Block size {bs} Num local dofs {num_dofs_local*bs} Num ghosts {num_ghosts*bs}", local_to_global, owners)

yielding:

mpirun -n 3 python3 script.py 
Rank 0, Block size 1 Num local dofs 19 Num ghosts 17 [ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 48 49 50 24 25
 26 27 28 31 32 19 20 21 22 23 57 58] [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 2 1 1 1 1 1 1 1 1 1 1 1 1 2 2]
Rank 1, Block size 1 Num local dofs 14 Num ghosts 31 [19 20 21 22 23 24 25 26 27 28 29 30 31 32 48  1 49 50  4  5 57 58  6  7
 33 36 37 53 54  0  2  3 10 11 35 40 41 51 52 44 45  8  9 55 56] [1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 0 2 2 0 0 2 2 0 0 2 2 2 2 2 0 0 0 0 0 2 2 2
 2 2 2 2 0 0 2 2]
Rank 2, Block size 1 Num local dofs 26 Num ghosts 17 [33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56
 57 58 19 20 21 24 25 26 27 28 29 30  1  4  5 22 23  6  7] [2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0
 0 0 1 1 0 0]
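
As a quick sanity check (a sketch continuing from the code above), the unrolled map lines up with the layout of u.x.array: filling the array with the global indices and then updating the ghost entries from their owners changes nothing.

# Sanity check: owned and ghost entries already hold their own global index,
# so a forward scatter from the owning ranks leaves the array unchanged
u.x.array[:] = local_to_global
u.x.scatter_forward()
assert np.array_equal(u.x.array, local_to_global)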

Thank you @dokken, it’s nice to have a complete snippet. It seems an involved process though, especially compared to the one-liner collapse.

I’m not sure it was even relevant to create a topic here - my point was more “this is harder than expected, am I doing this right?” than a bug report. My point stands for any FunctionSpace and FiniteElement object, really - but in this case:

import ufl
import dolfinx as dfx
from mpi4py.MPI import COMM_WORLD as comm

mesh = dfx.mesh.create_unit_square(comm, 8, 8, dfx.mesh.CellType.quadrilateral)
FE = ufl.VectorElement("CG", mesh.ufl_cell(), 2, 3)
V = dfx.fem.FunctionSpace(mesh, FE)

To me, a major selling point of the dolfinx library is the fact that it’s in Python, so it allows the user to leverage a bunch of other powerful libraries. This is harder to do if access to the data, namely Function.x.array and its dof map, is impaired. I think the local_to_global map you supplied could be readily available as a property of dolfinx.fem.Function, but I don’t know how to contribute it…
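
Roughly what I have in mind, as a sketch (the helper name unrolled_local_to_global is purely illustrative, it is not an existing dolfinx API):

import numpy as np

def unrolled_local_to_global(V):
    # Global indices of all local dofs (owned first, then ghosts),
    # unrolled for block size, matching the layout of Function.x.array
    imap = V.dofmap.index_map
    bs = V.dofmap.index_map_bs
    blocks = np.concatenate([np.arange(*imap.local_range, dtype=np.int64), imap.ghosts])
    return (blocks[:, None] * bs + np.arange(bs)).reshape(-1)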

Thank you for your time, I’m sure we both have better things to do.

The code above is about 10 lines, which can for instance be sped up with numba if you find it slow.
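
The double loop can also be written as a vectorized numpy equivalent (a sketch reusing the variables from the example above):

# Vectorized version of the unrolling loop above
all_blocks = np.concatenate([local_dof_map, ghosts])
local_to_global = (all_blocks[:, None] * bs + np.arange(bs)).reshape(-1)
owners = np.repeat(np.concatenate([np.full(num_dofs_local, domain.comm.rank), ghost_ownership]), bs)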

DOLFINx is a core library: it provides you with all the tools needed to interface with other libraries. As all the code above leverages numpy arrays, it is easy to adapt to other libraries in Python.

What you want is fairly custom, i.e. a map from the local indices (including ghosts) to the global ones, together with their ownership. Having the full array of ownership as in the code above is usually not necessary, as the owner of any local index is trivial to extract from the index map (size_local, owners and the block size), for instance as sketched below.
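
A sketch of such an on-the-fly lookup (the helper name dof_owner is illustrative, reusing V and domain from the example above):

imap = V.dofmap.index_map
bs = V.dofmap.index_map_bs

def dof_owner(k):
    # Owning rank of the unrolled local dof index k
    block = k // bs
    if block < imap.size_local:
        return domain.comm.rank
    return imap.owners[block - imap.size_local]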

In DOLFINx we have an explicit split between the local indices and the ghost indices (as opposed to PETSc). We also have optimizations for block size (vector and tensor spaces), which have to be unrolled when accessing the individual degrees of freedom.
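
Concretely (a sketch continuing from the mixed-space example above), the local array of a Function is laid out as the owned entries followed by the ghost entries, both unrolled by the block size:

imap = V.dofmap.index_map
bs = V.dofmap.index_map_bs
u = dolfinx.fem.Function(V)
# Owned entries come first, ghost entries last, both unrolled by the block size
assert len(u.x.array) == (imap.size_local + imap.num_ghosts) * bs
owned = u.x.array[:imap.size_local * bs]    # dofs owned by this rank
ghosted = u.x.array[imap.size_local * bs:]  # local copies of dofs owned by other ranks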