Cell-wise assembly in v10.0.0

I am looking to assemble a form cell-wise for use in downstream static condensation and cell-wise post-processing codes. I have adapted previous solutions (Element-wise post-processing - #4 by dokken and Invert DG block diagonal matrix - #4 by dokken), but I am having trouble making them work with v10.0.0.

I get the error

Traceback (most recent call last):
  File "/home/lmolel/work/ssb/lassemble-v10.0.0.py", line 94, in <module>
    A = assembler.assemble_matrix(2)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lmolel/work/ssb/lassemble-v10.0.0.py", line 77, in assemble_matrix
    self.kernel(ffi_fb(A_local), ffi_fb(coeffs), ffi_fb(self.consts), ffi_fb(geometry),
TypeError: 'void(*)(double *, double *, double *, double *, int *, uint8_t *, void *)' expects 7 arguments, got 6

This can be reproduced with the following script:

import numpy as np
import ufl
from mpi4py import MPI
from petsc4py.PETSc import ScalarType
from ufl import dx, grad, inner, ds
from dolfinx import fem, mesh
import cffi


_type_to_offset_index = {fem.IntegralType.cell: 0, fem.IntegralType.exterior_facet: 1}


class LocalAssembler():

    def __init__(self, form, integral_type: fem.IntegralType = fem.IntegralType.cell):
        self.form = fem.form(form)
        self.integral_type = integral_type
        self.update_coefficients()
        self.update_constants()

        # subdomain_ids = self.form.integral_ids(integral_type)
        # assert(len(subdomain_ids) == 1)
        # assert(subdomain_ids[0] == -1)
        is_complex = np.issubdtype(ScalarType, np.complexfloating)
        nptype = "complex128" if is_complex else "float64"
        o_s = self.form.ufcx_form.form_integral_offsets[_type_to_offset_index[integral_type]]
        o_e = self.form.ufcx_form.form_integral_offsets[_type_to_offset_index[integral_type]+1]
        assert o_e - o_s == 1

        self.kernel = getattr(self.form.ufcx_form.form_integrals[o_s], f"tabulate_tensor_{nptype}")
        self.active_cells = self.form._cpp_object.domains(integral_type, 0)
        assert len(self.form.function_spaces) == 2
        self.local_shape = [0,0]
        for i, V in enumerate(self.form.function_spaces):
            self.local_shape[i] = V.dofmap.dof_layout.block_size * V.dofmap.dof_layout.num_dofs

        e0 = self.form.function_spaces[0].element
        e1 = self.form.function_spaces[1].element
        needs_transformation_data = e0.needs_dof_transformations or e1.needs_dof_transformations or \
            self.form._cpp_object.needs_facet_permutations
        if needs_transformation_data:
            raise NotImplementedError("Dof transformations not implemented")

        self.ffi = cffi.FFI()
        V = self.form.function_spaces[0]
        self.x_dofs = V.mesh.geometry.dofmap

    def update_coefficients(self):
        self.coeffs = fem.assemble.pack_coefficients(self.form)[(self.integral_type, 0)]

    def update_constants(self):
        self.consts = fem.assemble.pack_constants(self.form)

    def update(self):
        self.update_coefficients()
        self.update_constants()

    def assemble_matrix(self, i: int, local_index: int = 0):
        # Gather the coordinates of the geometry dofs of entity i
        x = self.form.function_spaces[0].mesh.geometry.x
        x_dofs = self.x_dofs[i]
        geometry = np.zeros((len(x_dofs), 3), dtype=np.float64)
        geometry[:, :] = x[x_dofs]

        # Local element tensor (rows: test space, columns: trial space)
        A_local = np.zeros((self.local_shape[0], self.local_shape[1]), dtype=ScalarType)
        facet_index = np.array([local_index], dtype=np.intc)
        facet_perm = np.zeros(0, dtype=np.uint8)
        if self.coeffs.shape == (0, 0):
            coeffs = np.zeros(0, dtype=ScalarType)
        else:
            coeffs = self.coeffs[i,:]
        ffi_fb = self.ffi.from_buffer
        self.kernel(ffi_fb(A_local), ffi_fb(coeffs), ffi_fb(self.consts), ffi_fb(geometry),
               ffi_fb(facet_index), ffi_fb(facet_perm))
        return A_local


msh = mesh.create_unit_square(MPI.COMM_WORLD, 3, 3, mesh.CellType.triangle)

V = fem.functionspace(msh, ("Lagrange", 1))

u = ufl.TrialFunction(V)
v = ufl.TestFunction(V)

assembler = LocalAssembler(inner(grad(u), grad(v)) * dx)

for cell in range(msh.topology.index_map(msh.topology.dim).size_local):
    if cell == 2:
        A = assembler.assemble_matrix(2)
        print(A)



facet_assembler = LocalAssembler(inner(u, v) * ds, fem.IntegralType.exterior_facet)

for cell in range(msh.topology.index_map(msh.topology.dim).size_local):
    if cell == 2:
        A = facet_assembler.assemble_matrix(2, local_index = 0)
        print(A)

This is due to a slight change in the API: a void pointer is now passed as the last argument, which makes it possible to pass arbitrary data to (custom) kernels. (Ref: Add void* to tabulate_tensor kernel by sclaus2 · Pull Request #753 · FEniCS/ffcx · GitHub)
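For reference, you can print the kernel's cffi type from the reproducer above to see the trailing void * parameter (a quick sketch; the output is the signature quoted in the traceback):

# Inspect the generated kernel's signature; it now ends with a void*
# custom-data parameter (added in ffcx PR #753).
print(assembler.ffi.typeof(assembler.kernel))
# <ctype 'void(*)(double *, double *, double *, double *, int *, uint8_t *, void *)'>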
By changing

        self.kernel(ffi_fb(A_local), ffi_fb(coeffs), ffi_fb(self.consts), ffi_fb(geometry),
               ffi_fb(facet_index), ffi_fb(facet_perm))

to

        self.kernel(ffi_fb(A_local), ffi_fb(coeffs), ffi_fb(self.consts), ffi_fb(geometry),
               ffi_fb(facet_index), ffi_fb(facet_perm), self.ffi.NULL)

the code runs.
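As a quick sanity check of the repaired assembler (a minimal sketch appended to the Poisson reproducer above): the element stiffness matrix should be symmetric, and its rows should sum to zero, because constants lie in the kernel of the gradient.

import numpy as np

# Element stiffness matrix of cell 2 from the fixed assembler
A_local = assembler.assemble_matrix(2)
assert np.allclose(A_local, A_local.T)        # inner(grad(u), grad(v)) is symmetric
assert np.allclose(A_local.sum(axis=1), 0.0)  # constant functions have zero gradient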

When I apply the modifications you gave to the demos at the top of this post, why do I get

Traceback (most recent call last):
  File "/home/a/PycharmProjects/Explode/main2.py", line 91, in <module>
    A = assembler.assemble_matrix(2)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/a/PycharmProjects/Explode/main2.py", line 74, in assemble_matrix
    self.kernel(ffi_fb(A_local), ffi_fb(coeffs), ffi_fb(self.consts), ffi_fb(geometry),
TypeError: implicit cast from 'char *' to a different pointer type: (check that the types are as you expect; use an explicit ffi.cast() if they are correct)

This seems to be an issue with cffi>=2.0.0.
For now I would downgrade cffi. I will see if I can figure out why it happens.
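If you need to support both cffi versions in the same script, here is a minimal sketch of a version guard (assuming, as stated above, that the behaviour changed at 2.0.0):

import cffi

# cffi >= 2.0.0 no longer implicitly casts the 'char *' buffers returned by
# ffi.from_buffer to the typed pointers of the kernel signature, hence the
# TypeError above. Branch on the installed version to decide whether explicit
# ffi.cast calls are needed.
needs_explicit_casts = int(cffi.__version__.split(".")[0]) >= 2
print(f"cffi {cffi.__version__}: explicit casts required: {needs_explicit_casts}")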

If you change the self.kernel call from

        self.kernel(ffi_fb(A_local), ffi_fb(coeffs), ffi_fb(self.consts), ffi_fb(geometry),
               ffi_fb(facet_index), ffi_fb(facet_perm), self.ffi.NULL)

to

        self.kernel(
            self.ffi.cast("double *", ffi_fb(A_local)),
            self.ffi.cast("double *", ffi_fb(coeffs)),
            self.ffi.cast("double *", ffi_fb(self.consts)),
            self.ffi.cast("double *", ffi_fb(geometry)),
            self.ffi.cast("int *", ffi_fb(facet_index)),
            self.ffi.cast("uint8_t *", ffi_fb(facet_perm)),
            self.ffi.NULL,
        )

the code should run with cffi>=2.0.0.
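One caveat (my addition, not verified against a complex build): the "double *" casts above assume a real PETSc build. For a complex build the scalar arrays are complex128 and the generated kernel is tabulate_tensor_complex128, whose scalar arguments are double _Complex pointers. A sketch that picks the cast type from ScalarType, mirroring the is_complex check in __init__:

# Sketch: choose the scalar pointer type from PETSc's ScalarType so the same
# call works for real and complex builds (geometry coordinates stay float64).
scalar_ptr = "double _Complex *" if np.issubdtype(ScalarType, np.complexfloating) else "double *"
self.kernel(
    self.ffi.cast(scalar_ptr, ffi_fb(A_local)),
    self.ffi.cast(scalar_ptr, ffi_fb(coeffs)),
    self.ffi.cast(scalar_ptr, ffi_fb(self.consts)),
    self.ffi.cast("double *", ffi_fb(geometry)),
    self.ffi.cast("int *", ffi_fb(facet_index)),
    self.ffi.cast("uint8_t *", ffi_fb(facet_perm)),
    self.ffi.NULL,
)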


Thank you very much. Also, where can I find other demos covering some of the more advanced features of FEniCSx 0.10.0? The demos on the official site don't seem to include anything on the variational multiscale method.

As FEniCS is a general-purpose finite element software with roughly six active developers, we do not have the knowledge or spare time to make demos that cover every PDE or application that exists.
Thankfully, many users make their code open source, such as @kamensky for legacy FEniCS: GitHub - david-kamensky/VarMINT: Variational Multiscale Incompressible Navier--Stokes Toolkit
and @francesco-ballarin: Redirecting