Cell indices in the subdomain case when running in parallel

Hello everyone,

Recently I have been running my code in parallel, and I have a question about the cell indices on each processor. Let’s use the subdomain case from the FEniCSx tutorial, Defining subdomains for different materials, as an example. Here is the code; the only change I made is to print out cells_0 and cells_1 together with the processor they live on.

import gmsh
import numpy as np
import pyvista

from dolfinx.fem import (Constant, dirichletbc, Function, FunctionSpace, assemble_scalar,
                         form, locate_dofs_geometrical, locate_dofs_topological)
from dolfinx.fem.petsc import LinearProblem
from dolfinx.io import XDMFFile
from dolfinx.mesh import create_unit_square, locate_entities
from dolfinx.plot import create_vtk_mesh

from ufl import (SpatialCoordinate, TestFunction, TrialFunction,
                 dx, grad, inner)

from mpi4py import MPI
from petsc4py.PETSc import ScalarType

mesh = create_unit_square(MPI.COMM_WORLD, 10, 10)
Q = FunctionSpace(mesh, ("DG", 0))

def Omega_0(x):
    return x[1] <= 0.5

def Omega_1(x):
    return x[1] >= 0.5

kappa = Function(Q)
cells_0 = locate_entities(mesh, mesh.topology.dim, Omega_0)
cells_1 = locate_entities(mesh, mesh.topology.dim, Omega_1)

kappa.x.array[cells_0] = np.full_like(cells_0, 1, dtype=ScalarType)
kappa.x.array[cells_1] = np.full_like(cells_1, 0.1, dtype=ScalarType)

imap = kappa.function_space.dofmap.index_map
print("global size", imap.size_global)
print("processor", MPI.COMM_WORLD.rank, "local size", imap.size_local)

V = FunctionSpace(mesh, ("CG", 1))
u, v = TrialFunction(V), TestFunction(V)
a = inner(kappa*grad(u), grad(v)) * dx
x = SpatialCoordinate(mesh)
L = Constant(mesh, ScalarType(1)) * v * dx
dofs = locate_dofs_geometrical(V, lambda x: np.isclose(x[0], 0))
bcs = [dirichletbc(ScalarType(1), dofs, V)]

problem = LinearProblem(a, L, bcs=bcs, petsc_options={"ksp_type": "preonly", "pc_type": "lu"})
uh = problem.solve()

pyvista.set_jupyter_backend("pythreejs")

p = pyvista.Plotter(window_size=[800, 800], shape=(1,2))

# Filter out ghosted cells

num_cells_local = mesh.topology.index_map(mesh.topology.dim).size_local
marker = np.zeros(num_cells_local, dtype=np.int32)
print("processor", MPI.COMM_WORLD.rank, "cells 0", cells_0)
print("processor", MPI.COMM_WORLD.rank, "cells 1", cells_1)
print("processor", MPI.COMM_WORLD.rank, "number of local cells", num_cells_local)
cells_0 = cells_0[cells_0<num_cells_local]
cells_1 = cells_1[cells_1<num_cells_local]
marker[cells_0] = 1
marker[cells_1] = 2
topology, cell_types, x = create_vtk_mesh(mesh, mesh.topology.dim, np.arange(num_cells_local, dtype=np.int32))
grid = pyvista.UnstructuredGrid(topology, cell_types, x)
grid.cell_data["Marker"] = marker
grid.set_active_scalars("Marker")
p.subplot(0,0)
actor0 = p.add_mesh(grid, show_edges=True)
p.subplot(0,1)
grid_uh = pyvista.UnstructuredGrid(*create_vtk_mesh(V))
grid_uh.point_data[“u”] = uh.x.array.real
grid_uh.set_active_scalars(“u”)
actor1 = p.add_mesh(grid_uh, show_edges=True)
if not pyvista.OFF_SCREEN:
    p.show()
else:
    pyvista.start_xvfb()
    figure = p.screenshot("subdomains_structured.png")

Here is the result. We can see that on each processor the local size is 100, but the number of cells appearing in the output is 109 on each processor.

My question is: what are these cells whose indices are larger than 100? Are they ghost cells? I thought there should not be any ghost cells in this case. And if we drop them, will it affect the accuracy of the result?

Thanks a lot for your time and help!

Why do you think there should not be any ghost cells?
DOLFINx uses ghost_mode=shared_facet by default (ref: dolfinx.mesh — DOLFINx 0.8.0.0 documentation).
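If you want to check this directly, here is a minimal sketch (my own addition, not part of the tutorial) that prints, on each rank, how many cells are owned locally and how many are ghosts, using the index-map attributes size_local, num_ghosts and ghosts. With two processes, the 9 extra cells you observed (109 vs. 100) should show up as ghosts.

from mpi4py import MPI
from dolfinx.mesh import create_unit_square

mesh = create_unit_square(MPI.COMM_WORLD, 10, 10)
# Index map for the cells (topological dimension entities)
cell_map = mesh.topology.index_map(mesh.topology.dim)
print("rank", MPI.COMM_WORLD.rank,
      "owned cells:", cell_map.size_local,
      "ghost cells:", cell_map.num_ghosts)
# Global indices of the ghost cells (cells owned by other ranks)
print("rank", MPI.COMM_WORLD.rank, "ghost (global) indices:", cell_map.ghosts)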

If you turn off ghost_mode=shared_facet, you will have no ghost cells, and thus a smaller array.
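For example, something like this sketch (assuming a recent DOLFINx where GhostMode is importable from dolfinx.mesh) creates the mesh without ghost cells, so locate_entities only returns locally owned cells and the filtering against size_local is no longer needed:

from mpi4py import MPI
from dolfinx.mesh import create_unit_square, GhostMode

# No ghost layer is created; each rank only sees the cells it owns
mesh = create_unit_square(MPI.COMM_WORLD, 10, 10, ghost_mode=GhostMode.none)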

In most cases, turning off ghost mode will not affect the result (the ghost cells are mainly needed for DG formulations).

I haven’t looked at this for a while, but perhaps this little project can help you visualise the ghosts along with their indices.


Got it. Thanks sincerely for your help!