Transform dolfinx assemble matrix to numpy array

dokken, I’ll show you the picture after opening “tag_triangle.xdmf”. The error seems to happen when converting the mesh file to an XDMF file.

This is clearly an issue with either:
a) Pygmsh
b) Gmsh
c) Meshio
d) Paraview
as it works for me with:

  • pygmsh 6.0.8
  • gmsh 4.5.1
  • meshio 4.0.9
  • paraview version 5.7.0

dokken, I suspect the following code is the problem, but I cannot figure out what is wrong.

import pygmsh
import meshio
import numpy as np

filename = 'strip.msh'
mesh = meshio.read(
    filename,  # string, os.PathLike, or a buffer/open file
)
points, cells, cell_data, field_data = mesh.points, mesh.cells, mesh.cell_data, mesh.field_data

# Separate the cell blocks by type: triangles (domain cells) vs. lines (boundary facets)
triangle_data = None
line_data = None
tri_cell_idx = []
line_cell_idx = []

for i, cell in enumerate(cells):
    if cell.type == 'triangle':
        print(i, 'triangle')
        tri_cell_idx.append(i)
        if triangle_data is None:
            triangle_data = cell.data
        else:
            triangle_data = np.vstack([triangle_data, cell.data])
    else:
        print(i, 'line')
        line_cell_idx.append(i)
        if line_data is None:
            line_data = cell.data
        else:
            line_data = np.vstack([line_data, cell.data])


# Write the triangle (domain) cells with their physical tags
meshio.xdmf.write('tag_triangle.xdmf', meshio.Mesh(
    points=points,
    cells={'triangle': triangle_data},
    cell_data={
        'triangle': [np.concatenate([cell_data['gmsh:physical'][i] for i in tri_cell_idx])]
    },
    field_data=field_data
))
meshio.xdmf.write("tag_all.xdmf", meshio.Mesh(
    points=points,
    cells={"line": line_data},
    cell_data={
        "line": [np.concatenate([cell_data['gmsh:physical'][i] for i in line_cell_idx])]
    }
))
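
A quick way to sanity-check the converted files is to read them back with meshio and print the unique physical tags (a minimal sketch, assuming the cell data keeps the 'triangle' and 'line' names used above):

# Hypothetical sanity check, not part of the conversion script above:
# read the converted files back and print the unique physical tags they carry.
import numpy as np
import meshio

for fname, name in [("tag_triangle.xdmf", "triangle"), ("tag_all.xdmf", "line")]:
    m = meshio.read(fname)
    tags = np.concatenate([np.asarray(d) for d in m.cell_data[name]])
    print(fname, "unique physical tags:", np.unique(tags))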

dokken, can you check whether the physical boundary of the mesh is correct for your software versions? You can run the code below. The first eigenvalue should be 3.680109307299481.

import numpy as np
import pygmsh
import meshio
from dolfinx.io import XDMFFile
from dolfinx import (MPI, FacetNormal, Function, FunctionSpace, 
                     has_petsc_complex, DirichletBC, cpp)
from dolfinx.fem.assemble import assemble_scalar, assemble_matrix
from dolfinx.fem import locate_dofs_topological
from ufl import (TestFunctions, TrialFunctions, grad, inner, curl, split,
                  Measure, FiniteElement, dx, Dx, dot)
import dolfinx
from scipy.constants import c
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import eigs
from slepc4py import SLEPc
from petsc4py import PETSc

with XDMFFile(MPI.comm_world, "tag_triangle.xdmf") as xdmf_infile:
    mesh = xdmf_infile.read_mesh(cpp.mesh.GhostMode.none)
    mvc_subdomain = xdmf_infile.read_mvc_size_t(mesh)
    # tag_info = xdmf_infile.read_information_int()

with XDMFFile(MPI.comm_world, "tag_all.xdmf") as xdmf_infile:
    mvc_boundaries = xdmf_infile.read_mvc_size_t(mesh)

mf_triangle = cpp.mesh.MeshFunctionSizet(mesh, mvc_subdomain, 0)
mf_line = cpp.mesh.MeshFunctionSizet(mesh, mvc_boundaries, 0)

V = FiniteElement('N1curl', mesh.ufl_cell(), 2)
Q = FiniteElement('P', mesh.ufl_cell(), 3)
W = FunctionSpace(mesh, V*Q)

S = FunctionSpace(mesh, ('DG', 0))
e_r = Function(S)
x = S.tabulate_dof_coordinates()

for i in range(x.shape[0]):
    if mf_triangle.values[i] == 2:
        e_r.vector.setValueLocal(i, 8.875)
    else:
        e_r.vector.setValueLocal(i, 1)

u, p = TrialFunctions(W)
v, q = TestFunctions(W)

f = 25e9
k0_sqr = (2*np.pi*f/c)**2

# Electric wall: homogeneous Dirichlet condition on the facets tagged with 1
electric_wall = Function(W)

with electric_wall.vector.localForm() as bc_local:
    bc_local.set(0)

bndry_facets = np.where(mf_line.values == 1)[0]
bdofs = locate_dofs_topological(W, 1, bndry_facets)
bc = DirichletBC(electric_wall, bdofs)

theta_sqr = k0_sqr * 8.875

a = 1/theta_sqr*(inner(curl(u), curl(v)) - k0_sqr*e_r*inner(u, v))*dx 
b = inner(grad(p), v)*dx + inner(u, v)*dx + inner(grad(p), grad(q))*dx + inner(u, grad(q))*dx - k0_sqr*e_r*inner(p, q)*dx
c = b + a 

S = assemble_matrix(b, [bc])
T = assemble_matrix(c, [bc])

S.assemble()
S = S.realPart()

T.assemble()
T = T.realPart()

# Solve the generalized eigenvalue problem S x = lambda T x with SLEPc
es = SLEPc.EPS(which='LR').create(comm=MPI.comm_world)
es.setDimensions(100)
es.setOperators(S, T)
es.setFromOptions()
es.solve()

vr, vi = S.getVecs()
lam = es.getEigenpair(0, vr, vi)
print(lam)

The script gives me:

(35.7681974852247-1.2044409474322886e-12j)

But as I said, this is clearly a meshing issue. I suggest raising an issue on the pygmsh or meshio GitHub. But first you need to figure out what is wrong with your mesh.

As I said in the previous post, to do so, write the working and the failing mesh to file using meshio.xdmf.write("filename.xdmf", mesh, data_format="XML"). Then you can compare the meshes in plain text (use coarse meshes).
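
For instance, a minimal sketch (with hypothetical file names working.msh and failing.msh):

# Minimal sketch with hypothetical file names: write both meshes as plain-text
# XDMF so the coordinate and connectivity blocks can be diffed or compared by eye.
import meshio

for label in ("working", "failing"):
    mesh = meshio.read(f"{label}.msh")                      # use coarse meshes
    meshio.xdmf.write(f"{label}.xdmf", mesh, data_format="XML")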

ok, I’ll try. Thank you.

Hi @dokken

For an increased number of mesh elements, I am not able to convert the dolfinx assembled matrix to a numpy array.

import dolfinx
from dolfinx.fem import form, FunctionSpace, petsc
from mpi4py import MPI
import ufl
from ufl import ds, dx, grad, inner

mesh = dolfinx.mesh.create_unit_square(MPI.COMM_WORLD, 500, 500)
V = FunctionSpace(mesh, ("Lagrange", 1))

u = ufl.TrialFunction(V)
v = ufl.TestFunction(V)
a = inner(grad(u), grad(v))*dx
A = petsc.assemble_matrix(form(a))
A.assemble()
B = A.convert("dense")
C = A.getDenseArray()
print(C)
Error                                     Traceback (most recent call last)
Cell In[47], line 14
     12 A = petsc.assemble_matrix(form(a))
     13 A.assemble()
---> 14 B = A.convert("dense")
     15 C = A.getDenseArray()
     16 print(C)

File PETSc/Mat.pyx:713, in petsc4py.PETSc.Mat.convert()

Error: error code 56
[0] MatConvert() line 4439 in ./src/mat/interface/matrix.c
[0] MatConvert_SeqAIJ_SeqDense() line 207 in ./src/mat/impls/dense/seq/dense.c
[0] MatSeqDenseSetPreallocation() line 3109 in ./src/mat/impls/dense/seq/dense.c
[0] MatSeqDenseSetPreallocation_SeqDense() line 3127 in ./src/mat/impls/dense/seq/dense.c
[0] PetscIntMultError() line 2364 in ./include/petscsys.h
[0] No support for this operation for this object type
[0] Product of two integers 251001 251001 overflow, either you have an invalidly large integer error in your code or you must ./configure PETSc with --with-64-bit-indices for the case you are running

The above way works only for a small number of elements, but fails for an increased number of mesh elements. Can you suggest a way to obtain a numpy array from the assembled matrix?
Q2. When I try to use as_backend_type, it is not supported in dolfinx:

A = petsc.assemble_matrix(form(a))
A.assemble()
ai, aj, av= as_backend_type(A).mat().getValuesCSR()
Dhh_csr=csr_matrix((av, aj, ai))
Dhh=Dhh_csr.toarray()  
NameError                                 Traceback (most recent call last)
Cell In[48], line 14
     12 A = petsc.assemble_matrix(form(a))
     13 A.assemble()
---> 14 ai, aj, av= as_backend_type(A).mat().getValuesCSR()
     15 Dhh_csr=csr_matrix((av, aj, ai))
     16 Dhh=Dhh_csr.toarray()  

NameError: name 'as_backend_type' is not defined

Can you tell me how I can use as_backend_type to obtain the matrix in CSR or numpy format?
Any help is greatly appreciated.

as_backend_type is not needed; the matrix returned by petsc.assemble_matrix is already a petsc4py Mat, so you can call getValuesCSR() on it directly.
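
For example, adapting your snippet (a minimal sketch; the CSR data is extracted directly from the assembled matrix):

# Minimal sketch: petsc.assemble_matrix already returns a petsc4py Mat,
# so its CSR data can be extracted without as_backend_type.
from mpi4py import MPI
from scipy.sparse import csr_matrix
import dolfinx
from dolfinx.fem import form, FunctionSpace, petsc
import ufl
from ufl import dx, grad, inner

mesh = dolfinx.mesh.create_unit_square(MPI.COMM_WORLD, 500, 500)
V = FunctionSpace(mesh, ("Lagrange", 1))
u, v = ufl.TrialFunction(V), ufl.TestFunction(V)
A = petsc.assemble_matrix(form(inner(grad(u), grad(v))*dx))
A.assemble()

ai, aj, av = A.getValuesCSR()        # row pointers, column indices, values
Dhh_csr = csr_matrix((av, aj, ai))   # scipy CSR matrix, no dense conversion needed
print(Dhh_csr.shape, Dhh_csr.nnz)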

It’s unsurprising that you are unable to convert a large matrix into a numpy array: a numpy array is dense. The error message is saying that the square of 251001 (i.e., the number of entries of a 251001x251001 matrix) can’t be represented with your current 32-bit integer type. The error message does offer you a solution, but if I were you I wouldn’t try it: even with an integer type able to count up to 251001**2, you’d likely exceed the RAM available on the machine when trying to fill every entry of that matrix.
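
To put numbers on that (a back-of-the-envelope check):

# Back-of-the-envelope check for a dense 251001 x 251001 matrix of doubles
n = 251001
print(n * n)                  # 63001502001, larger than 2**31 - 1 = 2147483647
print(n * n * 8 / 1e9, "GB")  # ~504 GB of RAM needed to store the dense array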

Thanks @francesco-ballarin,
it is working for me with CSR.