Matrix-vector multiplication in parallel with assembled PETSc matrix

Hi everyone,

I’m using FEniCSx and I have a question about how to correctly perform a matrix-vector multiplication in parallel using MPI.

In serial, everything works as expected. I assemble a matrix K from a bilinear form and apply it to a vector, like this (where K has shape (3993, 3993) and the vector has shape (3993,)):

import dolfinx
import ufl
import numpy as np
from mpi4py import MPI
from dolfinx import fem
from dolfinx.fem import form
from dolfinx.fem.petsc import assemble_matrix
from scipy.sparse import csr_matrix

mesh = dolfinx.mesh.create_box(
    MPI.COMM_WORLD, [np.array([0, 0, 0]), np.array([20, 20, 20])], [10, 10, 10]
)
V = fem.functionspace(mesh, ("Lagrange", 1, (mesh.geometry.dim,)))

print("rank", mesh.comm.rank, "3 x size_local", 3 * V.dofmap.index_map.size_local)

# Constant vector field (0, 0, 1)
vector = fem.Function(V)
vector.interpolate(lambda x: np.vstack((np.zeros(x.shape[1]), np.zeros(x.shape[1]), np.ones(x.shape[1]))))

v = ufl.TestFunction(V)
f_trial = ufl.TrialFunction(V)

# Assemble the stiffness matrix of the vector Laplacian
a = ufl.inner(ufl.grad(f_trial), ufl.grad(v)) * ufl.dx
K = assemble_matrix(form(a))
K.assemble()

# Convert the PETSc matrix to SciPy CSR
# (renamed from I, J, V to avoid shadowing the function space V)
indptr, indices, data = K.getValuesCSR()
K_csr = csr_matrix((data, indices, indptr))
print("rank", mesh.comm.rank, "K csr shape", K_csr.shape)

result = K_csr @ vector.x.array

However, when I run this in parallel with mpirun -n 2 python script.py, K_csr has a different shape on each rank, and I’m not sure how to interpret or use these local matrices correctly. For example, I observe:

For 3 × size_local (the locally owned part of vector.x.array), I get:

rank 0: 3 x size_local = 1974
rank 1: 3 x size_local = 2019

For K_csr.shape I get:

rank 0: (1974, 3975)
rank 1: (2019, 3993)

I understand that the first number (e.g., 2019 on rank 1) is the number of locally owned rows (without ghosts), but I don’t understand what the second dimension (the number of columns) refers to, or why it differs between ranks.

What is the correct way to perform a matrix-vector product with an assembled PETSc matrix and a vector in parallel in FEniCSx?

Thanks

For matrix-vector multiplication with PETSc matrices, use K.mult, which operates on the distributed matrix directly and handles all inter-process communication for you.

The shapes you are seeing are an artifact of the SciPy conversion: getValuesCSR returns only the locally owned rows, with global column indices, and since no shape is passed to csr_matrix, SciPy infers the column count from the largest column index that happens to appear on each rank.
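A minimal sketch of the PETSc-native product, reusing the variables from your script; it assumes petsc4py’s createVecLeft and the Function.x.petsc_vec accessor from recent dolfinx (on older versions, use vector.vector instead):

x = vector.x.petsc_vec        # underlying distributed PETSc vector
y = K.createVecLeft()         # result vector with K's row layout
K.mult(x, y)                  # y = K @ x; ghost/communication handling is done by PETSc

# Optionally copy the result into a dolfinx Function and update its ghost values
result = fem.Function(V)
y.copy(result.x.petsc_vec)
result.x.scatter_forward()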

An example can be found in oasisx: https://github.com/ComputationalPhysiology/oasisx/blob/75b014a9c3a0a1e108f41af86c9359c5ff8ba29a/src/oasisx/fracstep.py
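If you still want a SciPy view of the locally owned rows, pin the shape explicitly instead of letting SciPy guess it. A sketch, assuming the indptr, indices, data arrays from your script and petsc4py’s getLocalSize/getSize:

n_local_rows = K.getLocalSize()[0]   # rows owned by this rank
n_global_cols = K.getSize()[1]       # global number of columns
K_csr = csr_matrix((data, indices, indptr), shape=(n_local_rows, n_global_cols))

Even then, K_csr @ x needs the full global vector x on every rank, while vector.x.array only holds the owned and ghost entries, which is why K.mult is the right tool in parallel.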