ValueError: vector::reserve when computing the points on the boundary of a domain

Dear all,

I am trying to compute the values of a function on the top boundary of my domain with the following function, which reuses some snippets of code provided in this discourse group as answers to previous questions on this topic. The function is a method of an object PC in which I define the whole problem.

def compute_values_on_top(self, u, p):
        domain, facet_markers = PC.domain, PC.facet_markers

        boundary_facets = facet_markers.indices[facet_markers.values == PC.facets_top_tag]
        bb_tree = dolfinx.geometry.bb_tree(domain, domain.topology.dim, padding=10)

        domain.topology.create_connectivity(domain.topology.dim-1, domain.topology.dim)
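        # Vertices incident to the facets tagged as the top boundary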
        boundary_vertices = dolfinx.mesh.compute_incident_entities(domain.topology, boundary_facets, domain.topology.dim-1, 0)
    
        boundary_coordinates = []
        for ind in boundary_vertices:
            boundary_coordinates.append(domain.geometry.x[ind].tolist())

        l = list(np.argsort(np.array(boundary_coordinates)[:,0]).astype(np.int32))
        boundary_coordinates = np.array(boundary_coordinates)[l]

        # Points on boundary

        points = np.array(boundary_coordinates).transpose()

        cells = []
        points_on_proc = []
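        # For each boundary point, find a mesh cell on this process that contains it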
        cell_candidates = dolfinx.geometry.compute_collisions_points(bb_tree, points.T)
        colliding_cells = dolfinx.geometry.compute_colliding_cells(domain, cell_candidates, points.T)
        for i, point in enumerate(points.T):
            if len(colliding_cells.links(i)) > 0:
                points_on_proc.append(point)
                cells.append(colliding_cells.links(i)[0])

# The function continues, but the rest of the code is not relevant and has been omitted.

I know this is not an MWE, but it is difficult to provide one in this case: the problem I am facing involves a domain whose upper boundary moves, and the complete code needed to reproduce it is very long. In fact, I have removed some non-relevant parts of the code from this function to make it more readable.

When I run the whole code with a very fine mesh, the following error is thrown after 50 time iterations. I do not see where the problem is, as the code runs perfectly for the first 49 time iterations. Moreover, if I use coarse meshes, I can run the code for any number of time iterations.

Plant_cell_model.py", line 773, in time_iterator
    values_inter_bnd = PC.compute_values_on_top(u, p)
  File "/data/Nextcloud/Universidad/Investigacion/Papers/Flow plant cell model/Plant_cell_model/src/Plant_cell_model.py", line 458, in compute_values_on_top
    cell_candidates = dolfinx.geometry.compute_collisions_points(bb_tree, points.T)
  File "/home/danielacos/micromamba/envs/fenicsx/lib/python3.13/site-packages/dolfinx/geometry.py", line 175, in compute_collisions_points
    return _cpp.geometry.compute_collisions_points(tree._cpp_object, x)
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^
ValueError: vector::reserve

My guess is that, since the upper part of the domain becomes more complex over time, the number of points on the top boundary increases, and this may cause some issues in the C++ backend of FEniCSx.

For reference, I am using FEniCSx v0.9, and here is a picture of the solution at the last computed time step.

As you haven’t provided enough code to reproduce the example, my reply will be quite general.

Could you print the shape of points.T before calling dolfinx.geometry.compute_collisions_points?

The output of points.T.shape is (2619, 3).

In any case, I have now detected the problem and fixed it by using a smaller padding, i.e. 10^{-14}, in the call to dolfinx.geometry.bb_tree.
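
That is, the line in my function now becomes (only the padding value changed with respect to the snippet above):

bb_tree = dolfinx.geometry.bb_tree(domain, domain.topology.dim, padding=1e-14)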

However, I think I do not really understand how this padding value may affect the results of the bounding box collision search.

Can you give me some hint on this?

Thanks for your help!

If you set padding to 10, you allow searches through a large number of cells (each cell's bounding box is padded with ±10 in each coordinate direction).

This means that if your mesh is, say, in the range [-1, 1], every cell will be within this collision box, which I guess can cause some issues in the bounding box geometry search.
The padding should usually not be much more than the cell diameter.
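
A rough way to pick a padding of that order of magnitude is to use the largest cell diameter of the mesh. A minimal sketch, assuming a dolfinx mesh named domain (it uses the same dolfinx.cpp.mesh.h helper that shows up further down in this thread):

import numpy as np
from mpi4py import MPI
import dolfinx

tdim = domain.topology.dim
num_cells = domain.topology.index_map(tdim).size_local
# Largest cell diameter on this process, then the global maximum over all ranks
h_local = dolfinx.cpp.mesh.h(domain._cpp_object, tdim, np.arange(num_cells, dtype=np.int32))
h_max = domain.comm.allreduce(np.max(h_local), op=MPI.MAX)

# Pad the bounding boxes by roughly one cell diameter instead of a fixed large value
bb_tree = dolfinx.geometry.bb_tree(domain, tdim, padding=h_max)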

Hi @dokken! Regarding our previous discussion on how the padding affects the results, I have found one difficulty that I need to solve for my problem.

In my case, as I am working with a moving boundary, I am trying to interpolate the functions from \Omega(t^m) onto the moved domain at the next time step, \Omega(t^{m+1}). Here you can find an MWE of what I want to do:

import dolfinx
from dolfinx import mesh
from dolfinx import fem
from mpi4py import MPI
import numpy as np
import pyvista
from dolfinx import plot
from dolfinx.io import gmshio
import gmsh

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
pyvista.OFF_SCREEN = True
np.random.seed(0)

nx = 40
padding = 1e-14

domain = mesh.create_rectangle(comm, points=((0, 0), (1, 1)), n=(nx, nx), cell_type=mesh.CellType.triangle, diagonal=dolfinx.cpp.mesh.DiagonalType.right_left)

topology, cell_types, geometry = plot.vtk_mesh(domain, domain.topology.dim)
grid = pyvista.UnstructuredGrid(topology, cell_types, geometry)
plotter = pyvista.Plotter()
plotter.add_mesh(grid, show_edges=True, color="white")
plotter.view_xy()
plotter.screenshot("mesh.png", transparent_background=True)
plotter.close()

V = fem.functionspace(domain, ("CG", 1))
u = fem.Function(V)
def u_init(x):
    return np.sin(x[1]*np.pi/2) + np.sin(x[0]*np.pi/2) 
u.interpolate(u_init)

topology, cell_types, geometry = plot.vtk_mesh(V)
grid = pyvista.UnstructuredGrid(topology, cell_types, geometry)
grid.point_data["u"] = u.x.array

grid.set_active_scalars("u")
plotter = pyvista.Plotter()
plotter.add_mesh(grid, show_edges=False, show_scalar_bar=True)
plotter.view_xy()
figure = plotter.screenshot(f"./u.png", transparent_background=True)
plotter.close()

#
# GMSH
#
gmsh.initialize()

gmsh.model.occ.addPoint(0,0,0,tag=1)
gmsh.model.occ.addPoint(0,1,0,tag=2)
gmsh.model.occ.addPoint(1,0,0,tag=3)
gmsh.model.occ.addPoint(1,1,0,tag=4)

gmsh.model.occ.addLine(4,3,tag=1)
gmsh.model.occ.addLine(3,1,tag=2)
gmsh.model.occ.addLine(1,2,tag=3)

nump = 10
x = []
y = np.random.rand(nump-2)/8+1
for i in range(nump-2):
    x.append((i+1) * 1/nump)
tag_vec = [2]
for i in range(len(x)):
    gmsh.model.occ.addPoint(x[i],y[i],0,tag=i+5)
    tag_vec.append(i+5)
tag_vec.append(4)

gmsh.model.occ.addSpline(tag_vec,tag=4)

gmsh.model.occ.addCurveLoop([1,2,3,4],tag=1)

gmsh.model.occ.addPlaneSurface([1])

gmsh.model.occ.synchronize()

gdim=2

gmsh.model.addPhysicalGroup(gdim,[1],tag=1,name="interior")
gmsh.model.addPhysicalGroup(1,[1,2,3],tag=2,name="rectangle_walls")
gmsh.model.addPhysicalGroup(1,[4],tag=3,name="top_wall")

gmsh.model.geo.synchronize()

# Create the mesh using gmsh
gmsh.option.setNumber("Mesh.CharacteristicLengthMin",1/nx)
gmsh.option.setNumber("Mesh.CharacteristicLengthMax",1/nx)
gmsh.model.mesh.generate(gdim)

# Convert the gmsh model into a FEniCSx mesh
gmsh_model_rank = 0
domain, cell_markers, facet_markers = gmshio.model_to_mesh(gmsh.model, comm, gmsh_model_rank, gdim=gdim)
gmsh.finalize()

topology, cell_types, geometry = plot.vtk_mesh(domain, domain.topology.dim)
grid = pyvista.UnstructuredGrid(topology, cell_types, geometry)
plotter = pyvista.Plotter()
plotter.add_mesh(grid, show_edges=True, color="white")
plotter.view_xy()
plotter.screenshot("moved_mesh.png", transparent_background=True)
plotter.close()

V_moved = fem.functionspace(domain, ("CG", 1))
u_moved = fem.Function(V_moved)

fine_mesh_cell_map = domain.topology.index_map(domain.topology.dim)
num_cells_on_proc = fine_mesh_cell_map.size_local + fine_mesh_cell_map.num_ghosts
cells = np.arange(num_cells_on_proc, dtype=np.int32)
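# create_interpolation_data finds, for the interpolation points of V_moved in `cells`,
# the cells of the original mesh of V that contain them; `padding` enlarges the
# bounding boxes used in this search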
interpolation_data = fem.create_interpolation_data(V_moved, V, cells, padding=padding)
u_moved.interpolate_nonmatching(u, cells, interpolation_data=interpolation_data)

topology, cell_types, geometry = plot.vtk_mesh(V_moved)
grid = pyvista.UnstructuredGrid(topology, cell_types, geometry)
grid.point_data["u_moved"] = u_moved.x.array

grid.set_active_scalars("u_moved")
plotter = pyvista.Plotter()
plotter.add_mesh(grid, show_edges=False, show_scalar_bar=True)
plotter.view_xy()
figure = plotter.screenshot(f"./u_moved.png", transparent_background=True)
plotter.close()

print("done")

The function on the initial domain is the following:

If I run the code with padding=1e-14, I obtain the following interpolated function:

However, this interpolation assigns 0 to the values of the function in the new region of the domain. If I run the code with padding=10 I obtain the following result, which is exactly what I wanted:

The problem is that padding=10 makes interpolation_data = fem.create_interpolation_data(V_moved, V, cells, padding=padding) extremely slow when the mesh is fine. Do you know of any workaround for this? For instance, I was thinking of using a bigger padding for the cells in the new region of the domain than for the old cells.

Is the mesh itself moving, or are you remeshing between the two steps?

Create two different interpolation operators: one with a small padding over those cells that barely move, and one only over those cells that are perturbed.
For the example at hand, where the mesh is 1x1, having a padding of 10 is crazy. One should adjust the padding to be of the magnitude of how far away a cell can be from another one for it to be considered for extrapolation.
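
A minimal sketch of that idea, reusing the names from your MWE (domain, V, V_moved, u, u_moved, nx); the split at y = 1 and the padding values are only illustrative:

h = 1.0 / nx  # approximate cell size of the MWE mesh
tdim = domain.topology.dim

# Cells that stay (essentially) inside the original unit square: a small padding is enough.
# The 2*h margin should be at least one cell diameter so that cells straddling y = 1 are included.
cells_static = dolfinx.mesh.locate_entities(domain, tdim, lambda x: x[1] <= 1.0 + 2 * h)
data_static = fem.create_interpolation_data(V_moved, V, cells_static, padding=2 * h)
u_moved.interpolate_nonmatching(u, cells_static, interpolation_data=data_static)

# Cells entirely in the new region above the old boundary: padding of the order of the displacement
cells_moved = dolfinx.mesh.locate_entities(domain, tdim, lambda x: x[1] >= 1.0 - 1e-10)
data_moved = fem.create_interpolation_data(V_moved, V, cells_moved, padding=0.3)
u_moved.interpolate_nonmatching(u, cells_moved, interpolation_data=data_moved)

Cells that end up in both sets are simply interpolated twice; the second call overwrites the first.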

For now I am remeshing between the two steps. Yes, I agree the padding was too big for this domain, but I used a very big one to make sure the result was the one I expected. In the next example I have adjusted the padding value for the global extrapolation.

I have changed the definition of the top boundary in the example from the random spline points used in the MWE above to

nump = nx
x = np.linspace(0, 1, nump)
y = np.sin(x*np.pi)/4 + 1
tag_vec = [2]
for i in range(len(x)-2):
    gmsh.model.occ.addPoint(x[i+1],y[i+1],0,tag=i+5)
    tag_vec.append(i+5)
tag_vec.append(4)

to ensure that the curve on the upper boundary of the domain is smooth.

I have also modified the algorithm used to extrapolate the values.

In the new version, I use a bigger, local padding for the cells that belong to the new region of the domain and a small one for the other cells.

num_local_cells = domain.topology.index_map(2).size_local
h_max = comm.allreduce(
    max(dolfinx.cpp.mesh.h(domain._cpp_object, 2, np.arange(num_local_cells, dtype=np.int32))),
    op=MPI.MAX)

# Faster, local extrapolation algorithm
lower_cells = dolfinx.mesh.locate_entities(domain, domain.topology.dim, lambda x: x[1] <= 1 + h_max + 1e-10)
interpolation_data_lower = fem.create_interpolation_data(V_moved, V, lower_cells, padding=2*h_max+1e-10)
u_moved.interpolate_nonmatching(u, lower_cells, interpolation_data=interpolation_data_lower)

distance = h_max
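# March upward in bands of one cell diameter: cells further above the old boundary get a
# progressively larger padding, so that their interpolation points can still be matched to
# cells of the original mesh, while cells close to it keep a small padding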
while distance < (np.max(y)-1) + 2*h_max + 1e-10:
    upper_cells = dolfinx.mesh.locate_entities(
        domain, domain.topology.dim,
        lambda x: np.logical_and(1 + (distance - 2*h_max) - 1e-10 <= x[1], x[1] <= 1 + distance + 1e-10))
    interpolation_data_upper = fem.create_interpolation_data(
        V_moved, V, upper_cells, padding=distance + h_max + 1e-10)
    u_moved.interpolate_nonmatching(u, upper_cells, interpolation_data=interpolation_data_upper)
    distance = distance + h_max

Now the time needed for the extrapolation has decreased significantly. If I run the code with nx = 100, the time decreases from 10 minutes with the global extrapolation (using the minimum padding that ensures the extrapolation is carried out over every cell, padding=np.max(y) + h_max + 1e-10) to only 4 seconds with the local extrapolation. The amount of RAM needed has also been reduced significantly.

Here I post the result extrapolated onto the new, moved domain (the initial u is still the same as in the previous reply):
