Hello All,
I am trying to perform the linear algebraic operation Ax = b, where the A matrix (denoted Kf here) is obtained from the bilinear part of the variational formulation, and the b vector (denoted phirhs_petsc here) has been converted to PETSc format.
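Concretely, the operation I am aiming for is roughly the following (just a sketch; it assumes Kf and phirhs_petsc already have compatible parallel layouts and uses standard petsc4py calls):
# Sketch of the target operation (assumes compatible layouts)
phi = Kf.createVecRight()               # solution vector, matches Kf's column layout
ksp = PETSc.KSP().create(Kf.getComm())  # Krylov solver on the matrix communicator
ksp.setOperators(Kf)
ksp.setFromOptions()
ksp.solve(phirhs_petsc, phi)            # solve Kf * phi = phirhs_petsc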
Initialization
from mpi4py import MPI
from petsc4py import PETSc
import numpy as np
from dolfinx.la import InsertMode
from dolfinx.fem import (Function, FunctionSpace, form)
from dolfinx.fem.petsc import (apply_lifting, assemble_matrix, assemble_vector, set_bc )
from dolfinx.mesh import (CellType, GhostMode, create_box, create_rectangle, create_unit_square, locate_entities_boundary, meshtags)
from ufl import dx, grad, dot, inner, SpatialCoordinate, Measure, inv
from ufl import TrialFunction, TestFunction
from scipy.sparse import csr_matrix,lil_matrix
def mpi_print(s):
    print(f"Rank {comm.rank}: {s}")
dtype = PETSc.ScalarType # type: ignore
comm = MPI.COMM_WORLD
rank = comm.rank
# Mesh (either regular or Gmsh)
domain = create_unit_square(MPI.COMM_WORLD, 2, 2, ghost_mode=GhostMode.shared_facet)
tdim = domain.topology.dim
# `domain` is the mesh created above
num_elements = domain.topology.index_map(tdim).size_local
num_nodes = domain.topology.index_map(0).size_local
# Summing up the counts from all processes in MPI_COMM_WORLD
num_elements_global = MPI.COMM_WORLD.allreduce(num_elements, op=MPI.SUM)
num_nodes_global = MPI.COMM_WORLD.allreduce(num_nodes, op=MPI.SUM)
num_cells = domain.topology.index_map(tdim).size_local + domain.topology.index_map(tdim).num_ghosts
num_vertices = domain.topology.index_map(0).size_local + domain.topology.index_map(0).num_ghosts
#Fespace
Vh1 = FunctionSpace(domain, ("CG", 1))
Vh1dc = FunctionSpace(domain, ("DG", 1))
Vh0 = FunctionSpace(domain, ("DG", 0))
Transforming the local NumPy array to a PETSc vector
num_dofs_local = Vh1.dofmap.index_map.size_local
phirhs = Trf * rhoO  # Trf and rhoO are computed earlier in my code (not shown here)
phirhsNO = phirhs[:num_dofs_local]  # keep only the locally owned dof values
# Convert numpy array to PETSc vector
phirhs_petsc = PETSc.Vec().createWithArray(phirhsNO)
mpi_print(f"Size of phirhs vector: {len(phirhs_petsc.array)}")
Steps to validate that the size of the Kf matrix is locally compatible with the phirhs_petsc vector:
uf, vf = TrialFunction(Vh1), TestFunction(Vh1)
a = form(inner(grad(uf), grad(vf)) * dx + uf * vf * dx)
bc = []
Kf = assemble_matrix(a, bc)
Kf.assemble()
# Construct CSR matrix from PETSc matrix
assert isinstance(Kf, PETSc.Mat)
ai, aj, av = Kf.getValuesCSR()
Asp = csr_matrix((av, aj, ai))
mpi_print(f"Size of Kf in sparse matrix form: {Asp.shape}")
To check the size of Kf, I converted it to a sparse matrix as in Transform dolfinx assemble matrix to numpy array.
Results from the previous two code snippets on 2 processes:
Rank 0: Size of phirhs vector: 4
Rank 0: Size of Kf in sparse matrix form: (4, 8)
Rank 1: Size of phirhs vector: 5
Rank 1: Size of Kf in sparse matrix form: (5, 9)
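If I understand correctly, getValuesCSR returns the locally owned rows with global column indices, so SciPy infers the column count from the largest column index present on each rank, which is why the shapes above are not square. As a cross-check (a sketch using standard petsc4py queries), the local and global sizes can also be read directly from the PETSc objects without converting to SciPy:
# Sketch: query sizes directly on the PETSc objects instead of converting
mpi_print(f"Kf local size (rows, cols): {Kf.getLocalSize()}")
mpi_print(f"Kf global size: {Kf.getSize()}")
mpi_print(f"Kf row ownership range: {Kf.getOwnershipRange()}")
mpi_print(f"phirhs_petsc (local, global) size: {phirhs_petsc.getSizes()}")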
And when I convert Kf to a dense matrix to validate the size, the reported size becomes global (spread over the whole mesh) when I run with MPI:
# Construct dense matrix from PETSc matrix
C = Kf.convert('dense')
C.getDenseArray()
mpi_print(f"Size of Kf in dense matrix form: {C.size}")
Results on 2 processes:
Rank 0: Size of phirhs vector: 4
Rank 0: Size of Kf in sparse matrix form: (4, 8)
Rank 0: Size of Kf in dense matrix form: (9, 9)
Rank 1: Size of phirhs vector: 5
Rank 1: Size of Kf in sparse matrix form: (5, 9)
Rank 1: Size of Kf in dense matrix form: (9, 9)
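If I instead look at the rank-local block (a sketch; I believe getDenseArray returns only the locally owned rows of the distributed dense matrix, while C.size reports the global shape), I would expect to see the per-rank row counts again:
# Sketch: C stays distributed after convert('dense'); inspect the local block
local_block = C.getDenseArray()
mpi_print(f"Local dense block shape: {local_block.shape}")  # (local rows, global cols)
mpi_print(f"Global dense size: {C.size}")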
How can I make sure the matrix and vector are compatible for matrix-vector multiplication when running with MPI? I am getting satisfactory results in the sequential case.
Thanks