Strange side-effect on input mesh to dolfinx.mesh.refine

I encountered a strange side-effect on the input mesh to dolfinx.mesh.refine. Integration on the mesh yields different results before and after calling refine.
The code below reproduces the issue.

from mpi4py import MPI
from dolfinx import __version__, git_commit_hash
from dolfinx.mesh import refine, create_rectangle
from dolfinx.fem import assemble_scalar, form
from ufl import Measure
msh = create_rectangle(MPI.COMM_WORLD, [(0,0),(1,1)], [10,10])
msh.topology.create_connectivity(2,1)
dx = Measure("dx", domain=msh)
ds = Measure("ds", domain=msh)
print("before refine")
print("dx", assemble_scalar(form(1*dx)))
print("ds", assemble_scalar(form(1*ds)))
msh_refined = refine(msh, None)
print("after refine")
print("dx", assemble_scalar(form(1*dx)))
print("ds", assemble_scalar(form(1*ds)))
print("---")
print("msh:", msh)
print("msh_refined:", msh_refined)
print(__version__, git_commit_hash)

On my installation this produces the following output:

before refine
dx 1.0000000000000007
ds 4.000000000000002
after refine
dx 0.9999999999999862
ds 3.999999999999993
---
msh: <dolfinx.mesh.Mesh object at 0x7f4a9dd175c0>
msh_refined: <dolfinx.mesh.Mesh object at 0x7f4a9dd17c80>
0.8.0 565c057655caea8e2990dd36b0a50759b4af25f1
fenics-dolfinx            0.8.0           py312h66e9945_105    conda-forge
fenics-libdolfinx         0.8.0              h17dcdb5_105    conda-forge

The change is in the 13th and 14th significant digit of a double.
When computing the volume of a mesh, the determinant of the Jacobian is computed. This involves multiplying the mesh coordinates with a set of basis functions, which introduces round-off error in the last digits of the double representation, and that round-off can differ between otherwise equivalent computations, for instance before and after calling refine.
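
As an illustration (plain NumPy, nothing dolfinx-specific), simply accumulating the same per-cell contributions in a different order already perturbs the last digits of a double, which is the size of the differences you are seeing:

import numpy as np

rng = np.random.default_rng(0)
areas = rng.random(200) * 1e-2  # stand-in for per-cell volume contributions

total_fwd = 0.0
for a in areas:
    total_fwd += a          # accumulate in one order

total_rev = 0.0
for a in areas[::-1]:
    total_rev += a          # accumulate in the reverse order

print(total_fwd - total_rev)  # typically non-zero, on the order of 1e-16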

You can see this even more clearly if you change to single precision meshes:

from mpi4py import MPI
from dolfinx import __version__, git_commit_hash
from dolfinx.mesh import refine, CellType, create_unit_square
from dolfinx.fem import assemble_scalar, form
import numpy as np
from ufl import Measure

dtype = np.float32

def assemble_volume():
    return assemble_scalar(form(1 * dx, dtype=dtype))

def assemble_surface():
    return assemble_scalar(form(1 * ds, dtype=dtype))

N = 10
msh = create_unit_square(MPI.COMM_WORLD, N,N, cell_type=CellType.triangle, dtype=dtype)
msh.topology.create_connectivity(2,1)
#msh.topology.create_connectivity(1,2)
dx = Measure("dx", domain=msh)
ds = Measure("ds", domain=msh)


vol_pre = assemble_volume()
sur_pre = assemble_surface()

x_pre = msh.geometry.x.copy()
msh_refined = refine(msh, None)
x_post = msh.geometry.x.copy()

print(np.max(abs(x_pre-x_post)))

vol_post = assemble_volume()
sur_post = assemble_surface()

print(f"{abs(vol_post-vol_pre)}")
print(f"{abs(sur_post-sur_pre)}")


print("msh:", msh)
print("msh_refined:", msh_refined)
print(__version__, git_commit_hash)

yielding

1.0728836059570312e-05
1.430511474609375e-06

instead of

0.0
1.4432899320127035e-14
8.881784197001252e-15

when using np.float64 for the grid representation.
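
For scale, compare with the machine epsilon of the two dtypes; the differences above sit only a couple of orders of magnitude above the respective epsilon:

import numpy as np

# single precision resolves roughly 7 decimal digits, double precision roughly 16
print(np.finfo(np.float32).eps)  # about 1.19e-07
print(np.finfo(np.float64).eps)  # about 2.22e-16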

First of all, thank you @dokken for your prompt and detailed response, and my apologies for not preparing my MWE with the necessary care.

I have prepared another example below. The problem occurs when integrating over subdomains. Integrals over the whole domain remain unaffected.

As in your “subdomains” tutorial, I prepare a domain with two physical regions. Before refining the original mesh, the integral over each subdomain yields 0.5, as expected. After calling refine, the same integrals yield wildly different results (0.122 and 0.124).

Info    : Meshing 1D...
Info    : [  0%] Meshing curve 1 (Line)
Info    : [ 20%] Meshing curve 2 (Line)
Info    : [ 30%] Meshing curve 3 (Line)
Info    : [ 50%] Meshing curve 4 (Line)
Info    : [ 60%] Meshing curve 5 (Line)
Info    : [ 80%] Meshing curve 6 (Line)
Info    : [ 90%] Meshing curve 7 (Line)
Info    : Done meshing 1D (Wall 0.000915329s, CPU 0.001274s)
Info    : Meshing 2D...
Info    : [  0%] Meshing surface 1 (Plane, Frontal-Delaunay)
Info    : [ 50%] Meshing surface 2 (Plane, Frontal-Delaunay)
Info    : Done meshing 2D (Wall 0.00665657s, CPU 0.006914s)
Info    : 103 nodes 218 elements
before refine
dx 0.9999999999999996
dx(1) 0.5
dx(2) 0.5
after refine
dx 1.0000000000000009
dx(1) 0.1224846896364883
dx(2) 0.12420233804951691
from mpi4py import MPI
import gmsh
import numpy as np
from dolfinx import __version__, git_commit_hash
from dolfinx.mesh import refine, create_rectangle
from dolfinx.fem import assemble_scalar, form
from dolfinx.io.gmshio import model_to_mesh
from ufl import Measure

# https://jsdokken.com/dolfinx-tutorial/chapter3/subdomains.html
gmsh.initialize()
proc = MPI.COMM_WORLD.rank
top_marker = 2
bottom_marker = 1
left_marker = 1
if proc == 0:
    # We create one rectangle for each subdomain
    gmsh.model.occ.addRectangle(0, 0, 0, 1, 0.5, tag=1)
    gmsh.model.occ.addRectangle(0, 0.5, 0, 1, 0.5, tag=2)
    # We fuse the two rectangles and keep the interface between them
    gmsh.model.occ.fragment([(2, 1)], [(2, 2)])
    gmsh.model.occ.synchronize()

    # Mark the top (2) and bottom (1) rectangle
    top, bottom = None, None
    for surface in gmsh.model.getEntities(dim=2):
        com = gmsh.model.occ.getCenterOfMass(surface[0], surface[1])
        if np.allclose(com, [0.5, 0.25, 0]):
            bottom = surface[1]
        else:
            top = surface[1]
    gmsh.model.addPhysicalGroup(2, [bottom], bottom_marker)
    gmsh.model.addPhysicalGroup(2, [top], top_marker)
    # Tag the left boundary
    left = []
    for line in gmsh.model.getEntities(dim=1):
        com = gmsh.model.occ.getCenterOfMass(line[0], line[1])
        if np.isclose(com[0], 0):
            left.append(line[1])
    gmsh.model.addPhysicalGroup(1, left, left_marker)
    gmsh.model.mesh.generate(2)

# model_to_mesh is collective; call it on all ranks (the gmsh model lives on rank 0)
msh, ct, ft = model_to_mesh(gmsh.model, MPI.COMM_WORLD, 0, gdim=2)
gmsh.finalize()

dx = Measure("dx", domain=msh, subdomain_data=ct)

print("before refine")
print("dx", assemble_scalar(form(1*dx)))
print("dx(1)", assemble_scalar(form(1*dx(1))))
print("dx(2)", assemble_scalar(form(1*dx(2))))
msh2 = refine(msh)
print("after refine")
print("dx", assemble_scalar(form(1*dx)))
print("dx(1)", assemble_scalar(form(1*dx(1))))
print("dx(2)", assemble_scalar(form(1*dx(2))))

Very strange indeed. I tried a slight modification:

from mpi4py import MPI
import numpy as np
from dolfinx.mesh import refine
from dolfinx.fem import assemble_scalar, form
from ufl import Measure

from dolfinx.mesh import create_unit_square, locate_entities, meshtags

msh = create_unit_square(MPI.COMM_WORLD, 10, 10)
left_cells = locate_entities(msh, msh.topology.dim, lambda x: x[0] < 0.5 + 1e-14)
num_cells_local = msh.topology.index_map(msh.topology.dim).size_local
values = np.full(num_cells_local, 1, dtype=np.int32)
values[left_cells] = 2
ct = meshtags(msh, msh.topology.dim, np.arange(num_cells_local, dtype=np.int32), values)
dx = Measure("dx", domain=msh, subdomain_data=ct)

v_form = form(1 * dx)
v1_form = form(1 * dx(1))
v2_form = form(1 * dx(2))

print("before refine")
print("dx", assemble_scalar(v_form))
print("dx(1)", assemble_scalar(v1_form))
print("dx(2)", assemble_scalar(v2_form))
print("old dx(1)", assemble_scalar(form(1 * dx(1))))
print("old dx(2)", assemble_scalar(form(1 * dx(2))))
msh.topology.create_entities(1)
msh2 = refine(msh)
print("after refine")
print("dx", assemble_scalar(v_form))
print("dx(1)", assemble_scalar(v1_form))
print("dx(2)", assemble_scalar(v2_form))
print("old dx(1)", assemble_scalar(form(1 * dx(1))))
print("old dx(2)", assemble_scalar(form(1 * dx(2))))

yielding:

before refine
dx 1.0000000000000007
dx(1) 0.5000000000000003
dx(2) 0.5000000000000003
old dx(1) 0.5000000000000003
old dx(2) 0.5000000000000003
after refine
dx 1.0000000000000007
dx(1) 0.5000000000000003
dx(2) 0.5000000000000003
old dx(1) 0.12500000000000008
old dx(2) 0.12500000000000006

Issue filed at: [BUG]: Mesh refinement messes up cache · Issue #3366 · FEniCS/dolfinx · GitHub
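
Based on the output above, a workaround until a fix is released seems to be to compile the subdomain forms before calling refine and reuse those compiled form objects afterwards. A minimal sketch of that pattern, continuing from the snippet above (msh, ct and dx as defined there):

# Compile the subdomain forms while the measure still refers to the original mesh data
v1_form = form(1 * dx(1))
v2_form = form(1 * dx(2))

msh.topology.create_entities(1)
msh_refined = refine(msh)

# The precompiled forms still give the expected 0.5 per subdomain,
# whereas form(1 * dx(1)) built after refine does not (see the output above)
print(assemble_scalar(v1_form))
print(assemble_scalar(v2_form))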

Thanks for this MWE.
I’ve filed an issue and fixed it with: Fix refine constructor (currently overwriting ufl_cargo) by jorgensd · Pull Request #3367 · FEniCS/dolfinx · GitHub

Thanks a lot for the workaround and for the quick fix. Much appreciated!