Setting periodic constraints for large meshes

Dear all,
I am trying to set periodic constraints on large meshes. However, I often get RuntimeErrors for meshes with a large number of DOFs. Is there a way to circumvent this? Could it be related to the parallel communication (I used 10 CPUs for the MWE)?

This is the error message:

  mpc_data = dolfinx_mpc.cpp.mpc.create_periodic_constraint_topological(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Newton method failed to converge for non-affine geometry

And this is the MWE:

import dolfinx
import dolfinx_mpc
from mpi4py import MPI
import basix.ufl
import numpy as np


n = [100, 400]
for n_size in n:
    domain = dolfinx.mesh.create_unit_cube(MPI.COMM_WORLD, n_size, n_size, n_size, cell_type=dolfinx.mesh.CellType.hexahedron)

    # function space
    P1 = basix.ufl.element("Lagrange", domain.topology.cell_name(), 1, gdim=domain.geometry.dim)
    V  = dolfinx.fem.functionspace(domain, P1)

    ## periodic boundary conditions
    mpc = dolfinx_mpc.MultiPointConstraint(V)

    def periodic_boundary_y(x):
        return np.isclose(x[1], 1.0, atol=1e-4)

    def periodic_relation_y(x):
        out_x = np.zeros_like(x)
        out_x[0] = x[0]
        out_x[1] = 1 - x[1]
        out_x[2] = x[2]
        return out_x

    bcs = []
    mpc.create_periodic_constraint_geometrical(V, periodic_boundary_y, periodic_relation_y, bcs)
    mpc.finalize()

    if MPI.COMM_WORLD.rank == 0:
        print(f"Succesfully created periodic constraint for size {n_size}.")

Is there any smaller example that reproduces the issue?
I only have a laptop over the weekend, and running a problem with 64 million DOFs is out of scope for it.

One thing I would try is to change

    def periodic_boundary_y(x):
        return np.isclose(x[1], 1.0, atol=1e-4)

to

    def periodic_boundary_y(x):
        return np.isclose(x[1], 1.0, atol=1e-10)

as your tolerance in the mapping is very large compared to the element size.

I think I've located the issue: when the elements get this small, the tolerance used in the pull-back for determining the intersection gets very close to machine precision.
See Make tolerance slightly larger by jorgensd · Pull Request #104 · jorgensd/dolfinx_mpc · GitHub for the suggested change.
At least the issue is resolved for a 180x180x180 mesh with 7 processes:

import dolfinx
import dolfinx_mpc
from mpi4py import MPI
import basix.ufl
import numpy as np


n = [180]
tol = 1e-8
for n_size in n:
    domain = dolfinx.mesh.create_unit_cube(
        MPI.COMM_WORLD, n_size, n_size, n_size, cell_type=dolfinx.mesh.CellType.hexahedron)

    # function space
    P1 = basix.ufl.element(
        "Lagrange", domain.topology.cell_name(), 1, gdim=domain.geometry.dim)
    V = dolfinx.fem.functionspace(domain, P1)
    print(domain.comm.rank, V.dofmap.index_map.size_local,
          "/", V.dofmap.index_map.size_global)
    # periodic boundary conditions
    mpc = dolfinx_mpc.MultiPointConstraint(V)

    def periodic_boundary_y(x):
        return np.isclose(x[1], 1.0, atol=tol)

    def periodic_relation_y(x):
        out_x = np.zeros_like(x)
        out_x[0] = x[0]
        out_x[1] = 1 - x[1]
        out_x[2] = x[2]
        return out_x

    bcs = []
    mpc.create_periodic_constraint_geometrical(
        V, periodic_boundary_y, periodic_relation_y, bcs)
    mpc.finalize()
    MPI.COMM_WORLD.Barrier()
    if MPI.COMM_WORLD.rank == 0:
        print(f"Succesfully created periodic constraint for size {n_size}.")
1 848968 / 5929741
2 848458 / 5929741
3 839859 / 5929741
5 852260 / 5929741
0 838798 / 5929741
Successfully created periodic constraint for size 180.
6 857276 / 5929741
4 844122 / 5929741

Tested for 200 and 220 as well:

0 1146683 / 8120601
Successfully created periodic constraint for size 200.
1 1162537 / 8120601
2 1171795 / 8120601
3 1151006 / 8120601
5 1162744 / 8120601
6 1168585 / 8120601
4 1157251 / 8120601

4 1548939 / 10793861
6 1552706 / 10793861
0 1518867 / 10793861
Successfully created periodic constraint for size 220.
3 1546520 / 10793861
2 1541853 / 10793861
1 1542862 / 10793861
5 1542114 / 10793861

Alright, that's reasonable. I will try this on Monday when I have access to more computing power.
I will also test it on my own meshes; if it does not work, at least I'll know what the issue is, and I can increase the element sizes (e.g. by rescaling the mesh to another physical unit).
Thank you very much for your help!

I executed your code and it does not work for me; I get the same error as described. However, last week I also tested a 190x190x190 mesh, and that worked only intermittently.

I also used a box with the coordinates [0,0,0] → [180,180,180] and an element size of 1.0, and this did not work either. Do you have any other idea?
Edit: It works for n = 180, 200, 220, 240, but not for e.g. 260 elements in each direction. Are there any limits?

import dolfinx
import dolfinx_mpc
from mpi4py import MPI
import basix.ufl
import numpy as np


n = [180]
tol = 1e-8
for n_size in n:
    domain = dolfinx.mesh.create_box(MPI.COMM_WORLD, [np.array([0, 0, 0]), np.array([n_size, n_size, n_size])], [n_size, n_size, n_size], cell_type=dolfinx.mesh.CellType.hexahedron)

    # function space
    P1 = basix.ufl.element(
        "Lagrange", domain.topology.cell_name(), 1, gdim=domain.geometry.dim)
    V = dolfinx.fem.functionspace(domain, P1)
    print(domain.comm.rank, V.dofmap.index_map.size_local,
          "/", V.dofmap.index_map.size_global)
    # periodic boundary conditions
    mpc = dolfinx_mpc.MultiPointConstraint(V)

    def periodic_boundary_y(x):
        return np.isclose(x[1], 1.0, atol=tol)

    def periodic_relation_y(x):
        out_x = np.zeros_like(x)
        out_x[0] = x[0]
        out_x[1] = 1 - x[1]
        out_x[2] = x[2]
        return out_x

    bcs = []
    mpc.create_periodic_constraint_geometrical(
        V, periodic_boundary_y, periodic_relation_y, bcs)
    mpc.finalize()
    MPI.COMM_WORLD.Barrier()
    if MPI.COMM_WORLD.rank == 0:
        print(f"Succesfully created periodic constraint for size {n_size}.")

Error message:

7 372199 / 5929741
7 2951568 / 47045881
15 378339 / 5929741
15 2949724 / 47045881
...
    mpc_data = dolfinx_mpc.cpp.mpc.create_periodic_constraint_geometrical(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Newton method failed to converge for non-affine geometry
    mpc_data = dolfinx_mpc.cpp.mpc.create_periodic_constraint_geometrical(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Newton method failed to converge for non-affine geometry
...
14 379141 / 5929741
14 2822049 / 47045881
...

I would guess it hits a similar issue, as you are reducing the mesh size more and more.
You could try to tweak that tolerance yourself: Make tolerance slightly larger by jorgensd · Pull Request #104 · jorgensd/dolfinx_mpc · GitHub
and maybe add some output to see how big the actual error is.
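
On the Python side, a rough way to see how far off the mapping is would be to compare the mapped slave coordinates against the actual master DOF coordinates. A brute-force sketch (serial, only sensible for a small n_size, reusing the names from your script):

    # For every DOF on the slave surface, apply the relation and measure the distance
    # to the nearest DOF on the master surface (y = 0). Large distances point at the
    # mapping itself, tiny ones at the pull-back tolerance.
    # Note: if slave_dofs comes out empty, the boundary marker itself does not match the mesh.
    coords = V.tabulate_dof_coordinates()
    slave_dofs = dolfinx.fem.locate_dofs_geometrical(V, periodic_boundary_y)
    master_dofs = dolfinx.fem.locate_dofs_geometrical(
        V, lambda x: np.isclose(x[1], 0.0, atol=tol))
    mapped = periodic_relation_y(coords[slave_dofs].T).T
    dists = np.linalg.norm(
        mapped[:, None, :] - coords[master_dofs][None, :, :], axis=2).min(axis=1)
    print("max distance from mapped slave DOFs to nearest master DOF:", dists.max())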


I do not reduce the mesh size; the element size is always 1.0, as

n = [180]
for n_size in n:
    domain = dolfinx.mesh.create_box(MPI.COMM_WORLD, 
    [np.array([0, 0, 0]), np.array([n_size, n_size, n_size])], [n_size, n_size, n_size], 
    cell_type=dolfinx.mesh.CellType.hexahedron)

But I will try to modify the tolerance.
Thank you!

Isn't the periodic relationship then wrong? It should be isclose(x[1], n_size), and similarly for the mapping.
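
In terms of code, a minimal sketch of what I mean (reusing the names from your script) would be:

    def periodic_boundary_y(x):
        # the slave surface now sits at y = n_size, not at y = 1.0
        return np.isclose(x[1], n_size, atol=tol)

    def periodic_relation_y(x):
        out_x = np.zeros_like(x)
        out_x[0] = x[0]
        out_x[1] = n_size - x[1]  # map y = n_size onto y = 0
        out_x[2] = x[2]
        return out_x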

Oh… Yes, of course. But the problem was still there.
Modifying the tolerance is the solution! Thank you very much!

@tuderic, could you comment on how large you had to make the tolerance?

I just changed the pull-back tolerance as you suggested:

cmap.pull_back_nonaffine(Xp, xp, coord_dofs,
                               5000 * std::numeric_limits<U>::epsilon(), 15);

Now it seems to work without error, even for meshes with n=500 elements in each direction.
The tolerance for the np.isclose function was left at its default of 1e-8.


Unfortunately, I have to come back to the problem as it is not fully solved.

With the higher tolerance the script runs without an error. However, looking at the result files, some DOFs at the boundaries are not periodic, although most DOFs on the surfaces are mapped correctly. (See the figure: it shows one surface of the 3D mesh; only the DOFs that appear irregular are not periodic.)

The issue occurred, for instance, for a 200x200x200 mesh but not for a 250x250x250 mesh, so the pattern is not clear to me. I played around with the tolerance factor (750, 1000, 5000, 50000 and 500000), which always leads to the same result.

Do you have any idea about the issue?

I would add some debug printing inside the pull-back to see what goes wrong.
A reproducible example would be helpful, so I could try to run it myself.

Would it be okay if I send you a link to the mesh files, as I don't use a built-in mesh?

Sure, but it will then take longer for me to debug the issue.

Thanks! Here is the mesh and the reproducible example.

And here is a reproducible example with a built-in mesh.

You can vary n_size and see that it works for, e.g., n_size=20, but there is a (small) error for n_size=200.

import dolfinx
from dolfinx.fem import FunctionSpace, Function, Constant, assemble_scalar, form, Expression
from dolfinx.io import XDMFFile
from dolfinx.mesh import locate_entities, meshtags, locate_entities_boundary
import dolfinx_mpc
import basix.ufl

import numpy as np
from mpi4py import MPI
from ufl import grad, dot, FacetNormal, Measure, TrialFunction, TestFunction, lhs, rhs, SpatialCoordinate
from petsc4py.PETSc import ScalarType

# Initialize Parallelization
############################
comm = MPI.COMM_WORLD
rank = comm.rank

def print0(string: str):
    """Print on rank 0 only"""
    if MPI.COMM_WORLD.rank == 0:
        print(string)

# mesh
######
n_size = 200
domain = dolfinx.mesh.create_unit_cube(
        MPI.COMM_WORLD, n_size, n_size, n_size, cell_type=dolfinx.mesh.CellType.hexahedron)

radius = 0.1
midpoint = 0.5
def cells_sphere(x):
    return ((x[0]-midpoint)**2 + (x[1]-midpoint)**2 + (x[2]-midpoint)**2) <= radius**2

def cells_cube(x):
    return ((x[0]-midpoint)**2 + (x[1]-midpoint)**2 + (x[2]-midpoint)**2) >= radius**2

cells_0 = locate_entities(domain, domain.topology.dim, cells_sphere)
cells_1 = locate_entities(domain, domain.topology.dim, cells_cube)

# material parameters
#####################
S = FunctionSpace(domain, ("DG", 0))
kappa = Function(S)

kappa.x.array[:] = 2.5
kappa.x.array[cells_0] = np.full_like(cells_0, 5.0, dtype=ScalarType)
kappa.x.array[cells_1] = np.full_like(cells_1, 1.0, dtype=ScalarType)

# meshing measures
dx = Measure('dx', domain=domain)  # Different volumes
x = SpatialCoordinate(domain)
domain.topology.create_connectivity(domain.topology.dim - 1,
                                    domain.topology.dim)

# Element formulation
P1 = basix.ufl.element("Lagrange", 
domain.topology.cell_name(), 1, gdim=domain.geometry.dim)
DG0 = basix.ufl.element("DG", domain.topology.cell_name(), 0, gdim=domain.geometry.dim, shape=(domain.geometry.dim,))

# Define FunctionSpaces
V = dolfinx.fem.functionspace(domain, P1)
W = dolfinx.fem.functionspace(domain, DG0)
T_ges = Function(V)
T_grad = Function(W)

# define trial and test functions
T_fluc = TrialFunction(V)
dT_fluc = TestFunction(V)

# boundary conditions
#####################
# prescribed "loads"
T_gradienten = []
for iii in ([1e-4]):
    T_gradienten.append((iii, 0, 0))
    T_gradienten.append((0, iii, 0))
    T_gradienten.append((0, 0, iii))
T_gradienten = tuple(T_gradienten)

T_gradient = Constant(domain, ScalarType((0,0,0)))      # Definition of a temperature gradient: can be changed later


bcs = []

# initialize Periodic Boundary Conditions
#########################################
pbc_directions = []
pbc_slave_tags = []
pbc_is_slave = []
pbc_is_master = []
pbc_meshtags = []
pbc_slave_to_master_maps = []

mpc = dolfinx_mpc.MultiPointConstraint(V)

def generate_pbc_slave_to_master_map(i):
    def pbc_slave_to_master_map(x):
        out_x = x.copy()
        out_x[i] = x[i] - domain.geometry.x[:, i].max()
        return out_x

    return pbc_slave_to_master_map

def generate_pbc_is_slave(i):
    return lambda x: np.isclose(x[i], domain.geometry.x[:, i].max())

def generate_pbc_is_master(i):
    return lambda x: np.isclose(x[i], domain.geometry.x[:, i].min())

for i in range(domain.topology.dim):

    pbc_directions.append(i)
    pbc_slave_tags.append(i + 3)
    pbc_is_slave.append(generate_pbc_is_slave(i))
    pbc_is_master.append(generate_pbc_is_master(i))
    pbc_slave_to_master_maps.append(generate_pbc_slave_to_master_map(i))

    facets = locate_entities_boundary(domain, domain.topology.dim -1, pbc_is_slave[-1])
    arg_sort = np.argsort(facets)
    pbc_meshtags.append(meshtags(domain,
                                 domain.topology.dim -1,
                                 facets[arg_sort],
                                 np.full(len(facets), pbc_slave_tags[-1], dtype=np.int32)))

N_pbc = len(pbc_directions)
for i in range(N_pbc):

    # slave/master mapping of opposite surfaces (without slave-slave[-slave] intersections)
    def pbc_slave_to_master_map(x):
        out_x = pbc_slave_to_master_maps[i](x)
        # remove edges that are connected to another slave surface
        idx = np.logical_or(pbc_is_slave[(i + 1) % N_pbc](x),pbc_is_slave[(i + 2) % N_pbc](x))
        out_x[i][idx] = np.nan
        return out_x

    mpc.create_periodic_constraint_topological(V, pbc_meshtags[i],
                                                   pbc_slave_tags[i],
                                                   pbc_slave_to_master_map,
                                                   bcs)

if len(pbc_directions) > 1:
    def generate_pbc_slave_to_master_map_edges(dir_i, dir_j):
        def pbc_slave_to_master_map_edges(x):
            '''
            Maps the slave edge dofs [intersection of slave_i, slave_j] (i=1,j=1) to
            master corner dof [master_i, master_j] (i=0,j=0)
            (1) map slave_x, slave_y to master_x, master_y: i=0, j=1
            (2) map slave_x, slave_z to master_x, master_z: i=0, j=2
            (3) map slave_y, slave_z to master_y, master_z: i=1, j=2
            '''
            out_x = x.copy()
            out_x[dir_i] = x[dir_i] - domain.geometry.x[:, dir_i].max()
            out_x[dir_j] = x[dir_j] - domain.geometry.x[:, dir_j].max()
            idx = np.logical_and(pbc_is_slave[dir_i](x), pbc_is_slave[dir_j](x))
            out_x[dir_i][~idx] = np.nan
            out_x[dir_j][~idx] = np.nan
            # remove corner DOFs at point (1,1,1)
            idx_corner = np.logical_and(pbc_is_slave[0](x), np.logical_and(pbc_is_slave[1](x), pbc_is_slave[2](x)))
            out_x[dir_i][idx_corner] = np.nan
            out_x[dir_j][idx_corner] = np.nan
            return out_x
        return pbc_slave_to_master_map_edges

    def pbc_slave_to_master_map_corner(x):
        '''
        Maps the slave corner dof [intersection of slave_x, slave_y, slave_z] (1,1,1) to
        master corner dof [master_x, master_y, master_z] (0,0,0)
        '''
        out_x = x.copy()
        out_x[0] = x[0] - domain.geometry.x[:, 0].max()
        out_x[1] = x[1] - domain.geometry.x[:, 1].max()
        out_x[2] = x[2] - domain.geometry.x[:, 2].max()
        idx_corner = np.logical_and(pbc_is_slave[0](x), np.logical_and(pbc_is_slave[1](x), pbc_is_slave[2](x)))
        out_x[0][~idx_corner] = np.nan
        out_x[1][~idx_corner] = np.nan
        out_x[2][~idx_corner] = np.nan
        return out_x

    mapping_slave_intersections = [(0,1),(0,2),(1,2)] # pairs of slave intersections

    # mapping slave intersection node (corner) to master intersection node (opposite corner)
    mpc.create_periodic_constraint_topological(V, pbc_meshtags[0],
                                               pbc_slave_tags[0],
                                               pbc_slave_to_master_map_corner,
                                               bcs)

    for inters in mapping_slave_intersections:
        # mapping slave intersections to opposite master intersections
        mpc.create_periodic_constraint_topological(V, pbc_meshtags[inters[0]],
                                                   pbc_slave_tags[inters[0]],
                                                   generate_pbc_slave_to_master_map_edges(inters[0], inters[1]),
                                                   bcs)

mpc.finalize()

# weak formulation
##################
eqn = - kappa * dot(grad(dot(T_gradient , x) + T_fluc), grad(dT_fluc)) * dx

# solve
#######
a_form = lhs(eqn)
L_form = rhs(eqn)
petsc_options = {"ksp_rtol": 1e-9, "ksp_atol": 1e-9, "pc_type": "jacobi", "ksp_type": "cg"}
problem = dolfinx_mpc.LinearProblem(a_form, L_form, mpc, bcs=bcs, petsc_options=petsc_options)
i_n = 0

# get total volume
V_tot = domain.comm.allreduce(assemble_scalar(form(1 * dx)), op=MPI.SUM)

# create XDMF files
file_results_1 = XDMFFile(domain.comm, f"T_fluc_TestPerRB_{n_size}.xdmf", "w")
file_results_1.write_mesh(domain)

for T_grad_i in range(len(T_gradienten)):
    i_n += 1

    T_gradient.value = T_gradienten[T_grad_i]

    T_fluc = problem.solve()
    T_fluc.name = 'T_fluc'

    # total temperature
    T_ges_expr = Expression(dot(T_gradient, x) + T_fluc, V.element.interpolation_points())
    T_ges.interpolate(T_ges_expr)

    # temperature gradient
    T_grad_expr = Expression(grad(T_ges), W.element.interpolation_points())
    T_grad.interpolate(T_grad_expr)

    # save to files
    file_results_1.write_function(T_fluc, i_n)

    # compute average temperature gradient
    T_grad_avg = np.zeros(domain.topology.dim)
    for k in range(domain.topology.dim):
        T_grad_avg[k] = 1/V_tot * domain.comm.allreduce(assemble_scalar(form( T_grad[k] * dx)), op=MPI.SUM)

    print0(f"Averaged temperature gradient in x: {T_grad_avg[0]} = {T_gradient.value[0]}?; in y: {T_grad_avg[1]}  = {T_gradient.value[1]}?; in z: {T_grad_avg[2]} = {T_gradient.value[2]}? ")

file_results_1.close()

Thanks, I’ll try to have a look tomorrow.
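
In the meantime, one way to quantify the non-periodicity instead of judging it from the result files is to compare the solution values on two opposite faces directly. A serial sketch for the unit-cube case (run on one process, placed right after problem.solve(), using T_fluc and V from your script; it pairs DOFs on the y = 0 and y = 1 faces by sorting on their x and z coordinates, which works for the structured hexahedral mesh):

    coords = V.tabulate_dof_coordinates()
    master = dolfinx.fem.locate_dofs_geometrical(V, lambda x: np.isclose(x[1], 0.0))
    slave = dolfinx.fem.locate_dofs_geometrical(V, lambda x: np.isclose(x[1], 1.0))
    # pair DOFs on opposite faces via a lexicographic sort of their (x, z) coordinates
    order_m = np.lexsort((coords[master, 2], coords[master, 0]))
    order_s = np.lexsort((coords[slave, 2], coords[slave, 0]))
    diff = T_fluc.x.array[master[order_m]] - T_fluc.x.array[slave[order_s]]
    print("max periodicity violation on the y-faces:", np.abs(diff).max())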


You didn’t have time to look at it, did you?

No, it’s been busy days. I’ll keep you posted once I’ve had time for this.
