Satisfying a while condition linked to parallelized variables

Hi all,

I am trying to deal with a very interesting problem: I am solving the heat equation, and I want to stop the simulation once the temperature at a specified point of the domain reaches some threshold value. Using this demo and this demo, I managed to put together the following MWE:

import matplotlib as mpl
import ufl
import numpy as np

from petsc4py import PETSc
from mpi4py import MPI
from dolfinx import geometry
from dolfinx import fem, mesh, io, plot
from dolfinx.fem.petsc import assemble_vector, assemble_matrix, create_vector, apply_lifting, set_bc

# Define temporal parameters
t = 0  # Start time
T = 1.0  # Final time
num_steps = 50
dt = T / num_steps  # time step size

# Define mesh
nx, ny = 50, 50
domain = mesh.create_rectangle(MPI.COMM_WORLD, [np.array([-2, -2]), np.array([2, 2])],
                               [nx, ny], mesh.CellType.triangle)
V = fem.functionspace(domain, ("Lagrange", 1))

def initial_condition(x, a=5):
    return np.exp(-a * (x[0]**2 + x[1]**2))


u_n = fem.Function(V)
u_n.name = "u_n"
u_n.interpolate(initial_condition)

# Create boundary condition
fdim = domain.topology.dim - 1
boundary_facets = mesh.locate_entities_boundary(
    domain, fdim, lambda x: np.full(x.shape[1], True, dtype=bool))
bc = fem.dirichletbc(PETSc.ScalarType(0), fem.locate_dofs_topological(V, fdim, boundary_facets), V)

xdmf = io.XDMFFile(domain.comm, "diffusion.xdmf", "w")
xdmf.write_mesh(domain)

# Define solution variable, and interpolate initial solution for visualization in Paraview
uh = fem.Function(V)
uh.name = "uh"
uh.interpolate(initial_condition)
xdmf.write_function(uh, t)

# Variational problem and solver
# We prepare the objects for the time-dependent problem once, so that we do not have to recreate the data structures at every time step.

u, v = ufl.TrialFunction(V), ufl.TestFunction(V)
f = fem.Constant(domain, PETSc.ScalarType(0))
a = u * v * ufl.dx + dt * ufl.dot(ufl.grad(u), ufl.grad(v)) * ufl.dx
L = (u_n + dt * f) * v * ufl.dx

# Preparing linear algebra structures for time-dependent problems
# Even though `u_n` is time dependent, we reuse the same functions for `f` and `u_n` at every time step. We therefore call `dolfinx.fem.form` once to generate assembly kernels for the matrix and vector.

bilinear_form = fem.form(a)
linear_form = fem.form(L)

# The left-hand-side matrix A does not change from one time step to the next, so we only need to assemble it once. The right-hand side, however, depends on the previous time step through `u_n` and has to be reassembled at every step. We therefore create a vector `b` based on `L` and reuse it at every time step.

A = assemble_matrix(bilinear_form, bcs=[bc])
A.assemble()
b = create_vector(linear_form)

# Using petsc4py to create a linear solver
# Since we have already assembled `a` into the matrix `A`, we cannot use the `dolfinx.fem.petsc.LinearProblem` class. Instead, we create a PETSc linear solver, assign the matrix `A` to it, and choose the solution strategy.

solver = PETSc.KSP().create(domain.comm)
solver.setOperators(A)
solver.setType(PETSc.KSP.Type.PREONLY)
solver.getPC().setType(PETSc.PC.Type.LU)



# Locate the evaluation point: find the cell on this process (if any) that
# contains p_center. On processes that do not own the point, `cell` and
# `points_on_proc` stay empty.
p_center = np.array([0., 0., 0.])
threshold = 0.5
bb_tree = geometry.bb_tree(domain, domain.topology.dim)
cell = []
points_on_proc = []
cell_candidates = geometry.compute_collisions_points(bb_tree, p_center)
colliding_cells = geometry.compute_colliding_cells(domain, cell_candidates, p_center)
if len(colliding_cells.links(0)) > 0:
    points_on_proc.append(p_center)
    cell.append(colliding_cells.links(0)[0])
points_on_proc = np.array(points_on_proc, dtype=np.float64)

# Time-stepping: run until the temperature at p_center reaches the threshold
while uh.eval(points_on_proc, cell) < threshold:
    t += dt

    # Update the right hand side reusing the initial vector
    with b.localForm() as loc_b:
        loc_b.set(0)
    assemble_vector(b, linear_form)

    # Apply Dirichlet boundary condition to the vector
    apply_lifting(b, [bilinear_form], [[bc]])
    b.ghostUpdate(addv=PETSc.InsertMode.ADD_VALUES, mode=PETSc.ScatterMode.REVERSE)
    set_bc(b, [bc])

    # Solve linear problem
    solver.solve(b, uh.x.petsc_vec)
    uh.x.scatter_forward()

    # Update solution at previous time step (u_n)
    u_n.x.array[:] = uh.x.array

    # Write solution to file
    xdmf.write_function(uh, t)

xdmf.close()

I am trying to make this code run in parallel, but I am obviously failing to handle the values in the while condition. Surprisingly, it runs fine on 2 processes (I think the point just falls on the boundary shared by the two subdomains), but 3 or more processes give:

ValueError: The truth value of an empty array is ambiguous. Use `array.size > 0` to check that an array is not empty.

The error itself is clear. I have tried broadcasting the point and cell data over the processes, but it did not work. It seems like there is something going on with the eval function that I am missing.

How should I approach this problem? Any comments would be greatly appreciated.

The eval function only works locally on each process.
What you should do is use bcast to send the result of eval from the process that owns the point to all the others.
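A minimal sketch of that approach, reusing `points_on_proc` and `cell` from your MWE (the helper name `eval_point_global` is just for illustration, and it assumes a single scalar evaluation point that lies inside the domain):

def eval_point_global(uh, points_on_proc, cells, comm):
    # Evaluate locally on the process that found the point
    if len(cells) > 0:
        value = uh.eval(points_on_proc, cells).flatten()[0]
        owner = comm.rank
    else:
        value = None
        owner = -1
    # Agree on a single root rank; if the point sits on a partition
    # boundary, several processes may have found it, so take the max
    root = comm.allreduce(owner, op=MPI.MAX)
    # Broadcast the evaluated value from the owning process to all others
    return comm.bcast(value, root=root)

while eval_point_global(uh, points_on_proc, cell, domain.comm) < threshold:
    ...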

An alternative way is to set up a “sparse” (in this case not very sparse) communication of the evaluated data.
Here is an example:

import math

import dolfinx

p_center = np.array([0.0, 0.0, 0.0])

point_ownership = dolfinx.cpp.geometry.determine_point_ownership(
    domain._cpp_object, p_center, 1e-6
)


def evaluate_function(uh, ownership_data):
    # Value size of `uh` (1 for a scalar function)
    u_size = math.prod(uh.ufl_shape)

    # Evaluate uh at the points owned by this process
    values = uh.eval(ownership_data.dest_points, ownership_data.dest_cells).astype(
        np.float64
    )

    # Send the evaluated values back to the processes that requested them
    comm = uh.function_space.mesh.comm
    unique_owners, owner_count = np.unique(
        ownership_data.dest_owners, return_counts=True
    )
    recv, inc_size = np.unique(ownership_data.src_owner, return_counts=True)

    sub_comm = comm.Create_dist_graph_adjacent(recv, unique_owners, reorder=False)
    incoming_data = np.zeros(
        (len(ownership_data.src_owner), u_size), dtype=PETSc.ScalarType
    )
    owner_count *= u_size
    inc_size *= u_size
    s_msg = [values, owner_count, MPI.DOUBLE]
    r_msg = [incoming_data, inc_size, MPI.DOUBLE]
    sub_comm.Neighbor_alltoallv(s_msg, r_msg)
    sub_comm.Free()
    # With more than a single point you need to decide on a reduction;
    # here we take the max value
    return np.max(incoming_data)


threshold = 1.2

while evaluate_function(uh, point_ownership) < threshold:
    ...

where you can decide how you would like to reduce the values if you evaluate at multiple points across multiple processes.
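For instance (just as an illustration), if you evaluate at several points and want the loop to keep running until all of them have passed the threshold, the final reduction in `evaluate_function` could instead be:

    return np.min(incoming_data)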

Thanks Joergen! Will check this in detail.