Wrong cell area computed in parallel

I need the area of every cell in the mesh (1st and 2nd order) for a mesh-quality calculation. A DG0 function is used to store the area of every cell in the mesh. However, neither the fem.form approach nor the NumPy approach works properly in the halo (ghost) region of each partition in a parallel run. The wrong result is shown in the attached figure.

MWE:

# cell_surface.py
# Tested on DOLFINX 0.9.0
# $ mpirun -n 3 python cell_surface.py

import numpy as np
from mpi4py import MPI
import ufl
from dolfinx import fem, io, mesh

def compute_cell_surface_form(_mesh):

    V_dg = fem.functionspace(_mesh, ("DG", 0))
    v_dg = ufl.TestFunction(V_dg)

    peri_form = fem.form(
        1.0 * v_dg * ufl.ds +       # exterior facet contributions
        1.0 * v_dg('+') * ufl.dS +  # interior facets, '+' side
        1.0 * v_dg('-') * ufl.dS    # interior facets, '-' side
    )
    
    peri_vec = fem.assemble_vector(peri_form)

    peri_f = fem.Function(V_dg, name="Cell_Surface_Area")
    peri_f.x.array[:] = peri_vec.array
    peri_f.x.scatter_forward()

    return peri_f


def compute_cell_surface_numpy(_mesh):

    tdim = _mesh.topology.dim
    V_dg0 = fem.functionspace(_mesh, ("DG", 0))
    result_func = fem.Function(V_dg0, name="Cell_Surface_Area")
    
    num_cells_local = _mesh.topology.index_map(tdim).size_local
    cell_to_vertex = _mesh.topology.connectivity(tdim, 0)
    geometry_coords = _mesh.geometry.x 
    
    local_values = np.zeros(num_cells_local)

    for i in range(num_cells_local):
        v_idx = cell_to_vertex.links(i)
        pts = geometry_coords[v_idx]
        
        total_measure = 0.0
        
        if tdim == 3:
            # 3D: Tetrahedron with 4 triangular faces
            faces = [(1,2,3), (0,2,3), (0,1,3), (0,1,2)]
            for f in faces:
                p1, p2, p3 = pts[f[0]], pts[f[1]], pts[f[2]]
                area = 0.5 * np.linalg.norm(np.cross(p2 - p1, p3 - p1))
                total_measure += area
                
        elif tdim == 2:
            # 2D: Triangle with 3 edges
            edges = [(0,1), (1,2), (2,0)]
            for e in edges:
                p1, p2 = pts[e[0]], pts[e[1]]
                length = np.linalg.norm(p2 - p1)
                total_measure += length
        
        local_values[i] = total_measure

    result_func.x.array[:num_cells_local] = local_values
    result_func.x.scatter_forward()

    return result_func
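As a side note, the geometric formulas used above (sum of edge lengths for a triangle, sum of cross-product face areas for a tetrahedron) can be sanity-checked in isolation with plain NumPy, independently of DOLFINx. The helper `cell_surface` below is a hypothetical standalone function, not part of the DOLFINx API:

```python
import numpy as np

def cell_surface(pts):
    """Perimeter of a triangle (3 points) or surface area of a
    tetrahedron (4 points), using the same formulas as above."""
    pts = np.asarray(pts, dtype=float)
    if len(pts) == 4:
        # Tetrahedron: sum of the 4 triangular face areas
        faces = [(1, 2, 3), (0, 2, 3), (0, 1, 3), (0, 1, 2)]
        return sum(
            0.5 * np.linalg.norm(np.cross(pts[b] - pts[a], pts[c] - pts[a]))
            for a, b, c in faces
        )
    # Triangle: sum of the 3 edge lengths
    edges = [(0, 1), (1, 2), (2, 0)]
    return sum(np.linalg.norm(pts[j] - pts[i]) for i, j in edges)

# Reference triangle: perimeter should be 2 + sqrt(2)
tri = [(0, 0), (1, 0), (0, 1)]
# Reference tetrahedron: surface should be 3 * 0.5 + sqrt(3)/2
tet = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(cell_surface(tri), cell_surface(tet))
```

This confirms the per-cell arithmetic is fine; the parallel bug is purely in the indexing, as explained below.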

if __name__ == "__main__":
    _mesh = mesh.create_unit_square(MPI.COMM_WORLD, 20, 20)
    f1 = compute_cell_surface_form(_mesh)
    f2 = compute_cell_surface_numpy(_mesh)

    with io.XDMFFile(_mesh.comm, "cell_surface_form.xdmf", "w") as xdmf:
        xdmf.write_mesh(_mesh)
        xdmf.write_function(f1)

    with io.XDMFFile(_mesh.comm, "cell_surface_numpy.xdmf", "w") as xdmf:
        xdmf.write_mesh(_mesh)
        xdmf.write_function(f2)

The mistake is that mesh vertices (topology) and geometry nodes do not map one-to-one, so indexing `_mesh.geometry.x` with topological vertex indices picks up the wrong coordinates. This can be fixed with the following minimal changes:

def compute_cell_surface_numpy(_mesh):

    tdim = _mesh.topology.dim
    V_dg0 = fem.functionspace(_mesh, ("DG", 0))
    result_func = fem.Function(V_dg0, name="Cell_Surface_Area")
    
    num_cells_local = _mesh.topology.index_map(tdim).size_local
    geometry_coords = _mesh.geometry.x 
    cell_to_vertex = mesh.entities_to_geometry(_mesh, tdim, np.arange(num_cells_local))

    local_values = np.zeros(num_cells_local)

    for i in range(num_cells_local):
        v_idx = cell_to_vertex[i]
        pts = geometry_coords[v_idx]
        
        total_measure = 0.0
        
        if tdim == 3:
            # 3D: Tetrahedron with 4 triangular faces
            faces = [(1,2,3), (0,2,3), (0,1,3), (0,1,2)]
            for f in faces:
                p1, p2, p3 = pts[f[0]], pts[f[1]], pts[f[2]]
                area = 0.5 * np.linalg.norm(np.cross(p2 - p1, p3 - p1))
                total_measure += area
                
        elif tdim == 2:
            # 2D: Triangle with 3 edges
            edges = [(0,1), (1,2), (2,0)]
            for e in edges:
                p1, p2 = pts[e[0]], pts[e[1]]
                length = np.linalg.norm(p2 - p1)
                total_measure += length
        
        local_values[i] = total_measure

    result_func.x.array[:num_cells_local] = local_values
    result_func.x.scatter_forward()

    return result_func

Furthermore, your other function is missing a scatter_reverse after assembling the local contributions. Thus you should do:

def compute_cell_surface_form(_mesh):

    V_dg = fem.functionspace(_mesh, ("DG", 0))
    v_dg = ufl.TestFunction(V_dg)

    peri_form = fem.form(
        1.0 * v_dg * ufl.ds +      
        1.0 * v_dg('+') * ufl.dS + 
        1.0 * v_dg('-') * ufl.dS   
    )
    
    peri_vec = fem.assemble_vector(peri_form)
    # Accumulate ghost contributions on the owning process
    # (requires `from dolfinx import la`)
    peri_vec.scatter_reverse(la.InsertMode.add)

    peri_f = fem.Function(V_dg, name="Cell_Surface_Area")
    peri_f.x.array[:] = peri_vec.array
    peri_f.x.scatter_forward()

    return peri_f

This fixes both problems.


Oh, that’s it. I have been mixing up the concepts of topology and geometry since the beginning.

Indeed, it outputs values lower than expected in the ghost region, since the contributions from remote partitions are not added. Unlike the common use of scatter_forward, how and when should we use scatter_reverse?

scatter_reverse is used to accumulate data from local assembly, i.e. after one has called assemble_vector (and apply_lifting). These operations do no MPI communication, so afterwards the contributions for each degree of freedom have to be gathered on the process that owns it.
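Conceptually, the two scatter directions can be sketched in serial NumPy with a hypothetical two-rank layout (this is an illustration of the owner/ghost semantics, not the actual DOLFINx implementation). Each rank stores its owned dofs followed by its ghost dofs; scatter_reverse(add) adds every ghost entry into the owner's copy, while scatter_forward overwrites every ghost with the owner's value:

```python
import numpy as np

# Hypothetical layout: each rank stores [owned..., ghosts...].
# Rank 0 owns global dofs {0, 1} and ghosts global dof 2;
# rank 1 owns global dofs {2, 3} and ghosts global dof 1.
owned = {0: [0, 1], 1: [2, 3]}
ghosts = {0: [2], 1: [1]}

# After local assembly each rank holds only its own contributions.
local = {0: np.array([1.0, 2.0, 0.5]),   # last entry: partial sum for dof 2
         1: np.array([3.0, 4.0, 0.25])}  # last entry: partial sum for dof 1

def owner_of(g):
    return next(r for r, dofs in owned.items() if g in dofs)

def scatter_reverse_add(vecs):
    """Add every ghost contribution into the owning rank's entry."""
    out = {r: v.copy() for r, v in vecs.items()}
    for r, gdofs in ghosts.items():
        for k, g in enumerate(gdofs):
            o = owner_of(g)
            out[o][owned[o].index(g)] += vecs[r][len(owned[r]) + k]
    return out

def scatter_forward(vecs):
    """Overwrite every ghost entry with the owner's value."""
    out = {r: v.copy() for r, v in vecs.items()}
    for r, gdofs in ghosts.items():
        for k, g in enumerate(gdofs):
            o = owner_of(g)
            out[r][len(owned[r]) + k] = vecs[o][owned[o].index(g)]
    return out

acc = scatter_reverse_add(local)  # owners now hold the full sums
sync = scatter_forward(acc)       # ghosts now mirror the owners
```

After the reverse scatter, dof 1 on rank 0 holds 2.0 + 0.25 and dof 2 on rank 1 holds 3.0 + 0.5; the forward scatter then copies those totals back into the ghost slots. Assembly needs reverse-add first (to complete the sums), and a forward scatter afterwards if ghost values are to be read.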

Oh, one thing to confirm: for the assembly of an Ax = b problem, we always call b.ghostUpdate(addv=PETSc.InsertMode.ADD, mode=PETSc.ScatterMode.REVERSE). Is peri_vec.scatter_reverse(la.InsertMode.add) equivalent to what we normally do in the ghost update of b?

Yes; one uses PETSc’s implementation of ghost updating while the other uses the DOLFINx native implementation.