Calculate the value of a solution at mesh points or at an arbitrary point

Hi all! I would like to know how to calculate the solution at a given point and at the mesh coordinates. Consider the Poisson example:

from mpi4py import MPI
from dolfinx import mesh
from dolfinx.fem import FunctionSpace
from dolfinx import fem
import numpy
import ufl
from petsc4py.PETSc import ScalarType

domain = mesh.create_unit_square(MPI.COMM_WORLD, 8, 8, mesh.CellType.quadrilateral)
V = FunctionSpace(domain, ("CG", 1))

uD = fem.Function(V)
uD.interpolate(lambda x: 1 + x[0]**2 + 2 * x[1]**2)

# Create facet to cell connectivity required to determine boundary facets
tdim = domain.topology.dim
fdim = tdim - 1
domain.topology.create_connectivity(fdim, tdim)
boundary_facets = mesh.exterior_facet_indices(domain.topology)
boundary_dofs = fem.locate_dofs_topological(V, fdim, boundary_facets)
bc = fem.dirichletbc(uD, boundary_dofs)

u = ufl.TrialFunction(V)
v = ufl.TestFunction(V)

f = fem.Constant(domain, ScalarType(-6))
a = ufl.dot(ufl.grad(u), ufl.grad(v)) * ufl.dx
L = f * v * ufl.dx
problem = fem.petsc.LinearProblem(a, L, bcs=[bc], petsc_options={"ksp_type": "preonly", "pc_type": "lu"})
uh = problem.solve()

Suppose now that I want the value of uh at the point [0.11, 0.27], for example. How can I get it?

In addition to that, is there a way to calculate the values of uh at the mesh points if we work with a vector function space? The familiar

u = uh.x.array

does not seem to work in this case…

Thanks in advance!

See for instance: Implementation — FEniCSx tutorial for how to evaluate the solution at one (or several) points in your domain.
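A minimal sketch of that approach for a single point, say [0.11, 0.27], assuming the same DOLFINx API as the code above (the variables domain and uh come from the snippet in the question):

import numpy as np
from dolfinx import geometry

# The point(s) to evaluate at; DOLFINx expects 3D coordinates
point = np.array([[0.11, 0.27, 0.0]])

# Locate a cell on this process that contains the point
bb_tree = geometry.BoundingBoxTree(domain, domain.topology.dim)
cell_candidates = geometry.compute_collisions(bb_tree, point)
colliding_cells = geometry.compute_colliding_cells(domain, cell_candidates, point)
if len(colliding_cells.links(0)) > 0:
    cell = colliding_cells.links(0)[0]
    print(uh.eval(point, [cell]))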

When working with a vector function space (assuming a non-mixed function space), the dofs are blocked as (dof_x(p_0), dof_y(p_0), dof_z(p_0), dof_x(p_1), dof_y(p_1), ..., dof_z(p_n)), where the points relate to the coordinates of the function space (not necessarily the mesh): point p_i is V.tabulate_dof_coordinates()[i, :].
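As an illustration, here is a small standalone sketch (the space W and function w are hypothetical, and the exact API depends on your DOLFINx version) showing how the blocked array can be reshaped so that row i holds the vector value at dof coordinate i:

from mpi4py import MPI
import numpy as np
from dolfinx import fem, mesh

msh = mesh.create_unit_square(MPI.COMM_WORLD, 4, 4)
W = fem.VectorFunctionSpace(msh, ("CG", 1))            # blocked vector space
w = fem.Function(W)
w.interpolate(lambda x: np.vstack((x[0], x[1])))       # interpolate a vector field

bs = W.dofmap.index_map_bs                             # block size (components per point)
values = w.x.array.reshape(-1, bs)                     # row i: (dof_x(p_i), dof_y(p_i))
coords = W.tabulate_dof_coordinates()                  # row i: coordinates of p_i
print(coords[0], values[0])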
If you really want to map mesh nodes to dofs, see: Application of point forces, mapping vertex indices to corresponding DOFs - #2 by dokken


Thanks, I think it works as it should. I post code here where the solution is evaluated at the mesh points:

from mpi4py import MPI
from petsc4py.PETSc import ScalarType
import numpy as np
import ufl
from dolfinx import fem, geometry, mesh
from dolfinx.fem import FunctionSpace

domain = mesh.create_unit_square(MPI.COMM_WORLD, 8, 8, mesh.CellType.quadrilateral)
V = FunctionSpace(domain, ("CG", 2))

uD = fem.Function(V)
uD.interpolate(lambda x: 1 + x[0]**2 + 2 * x[1]**2)

# Create facet to cell connectivity required to determine boundary facets
tdim = domain.topology.dim
fdim = tdim - 1
domain.topology.create_connectivity(fdim, tdim)
boundary_facets = mesh.exterior_facet_indices(domain.topology)
boundary_dofs = fem.locate_dofs_topological(V, fdim, boundary_facets)
bc = fem.dirichletbc(uD, boundary_dofs)

u = ufl.TrialFunction(V)
v = ufl.TestFunction(V)

f = fem.Constant(domain, ScalarType(-6))
a = ufl.dot(ufl.grad(u), ufl.grad(v)) * ufl.dx
L = f * v * ufl.dx
problem = fem.petsc.LinearProblem(a, L, bcs=[bc], petsc_options={"ksp_type": "preonly", "pc_type": "lu"})
uh = problem.solve()


# Coordinates of the boundary dofs (reuses boundary_dofs from above; not needed for the evaluation)
dof_coordinates = V.tabulate_dof_coordinates()[boundary_dofs]
# Evaluate the solution at the mesh vertices
points = domain.geometry.x

bb_tree = geometry.BoundingBoxTree(domain, domain.topology.dim)
cells = []
points_on_proc = []
# Find cells whose bounding box collides with the points
cell_candidates = geometry.compute_collisions(bb_tree, points)
# Choose one of the cells that contains each point
colliding_cells = geometry.compute_colliding_cells(domain, cell_candidates, points)
for i, point in enumerate(points):
    if len(colliding_cells.links(i)) > 0:
        points_on_proc.append(point)
        cells.append(colliding_cells.links(i)[0])
points_on_proc = np.array(points_on_proc, dtype=np.float64)
u_values = uh.eval(points_on_proc, cells)
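As a quick sanity check (a sketch: the interpolated boundary data 1 + x**2 + 2*y**2 is also the manufactured exact solution of this Poisson problem), the evaluated values can be compared with the exact expression:

# Optional check: compare against the exact solution u = 1 + x**2 + 2*y**2
u_exact = 1 + points_on_proc[:, 0]**2 + 2 * points_on_proc[:, 1]**2
print(np.max(np.abs(u_values.flatten() - u_exact)))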

Thanks for explaining how the dofs are blocked in a vector function space. It may be easier to use the above procedure in that case, though!

Why is it necessary for the user to explicitly identify the cells for the eval method of Function? Based on the linked tutorial, once the user specifies the points on which to evaluate the function, it seems like dolfinx has all the information needed to identify the cells and evaluate the function. Why not just include the procedure to identify the cells (using the geometry submodule) inside the eval method, and just have the single x argument?


It is easy for any user to wrap this up in their own convenience function, with their own needs in mind.
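For example, such a wrapper could look like the following (a sketch only; eval_on_points is a hypothetical helper name, not part of the DOLFINx API, and it reuses the same cell-search pattern as the code above):

import numpy as np
from dolfinx import geometry

def eval_on_points(u, points):
    """Hypothetical convenience wrapper: evaluate a fem.Function u at an
    (n, 3) array of points, returning only the points found in cells on
    this process together with the corresponding values."""
    msh = u.function_space.mesh
    bb_tree = geometry.BoundingBoxTree(msh, msh.topology.dim)
    cell_candidates = geometry.compute_collisions(bb_tree, points)
    colliding_cells = geometry.compute_colliding_cells(msh, cell_candidates, points)
    cells, points_on_proc = [], []
    for i, point in enumerate(points):
        if len(colliding_cells.links(i)) > 0:
            points_on_proc.append(point)
            cells.append(colliding_cells.links(i)[0])
    points_on_proc = np.array(points_on_proc, dtype=np.float64)
    return points_on_proc, u.eval(points_on_proc, cells)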

One of the core design principles of DOLFINx is to make it clear for the user what goes on under the hood, and make it easy to get code to run both in serial and parallel.

Issues with hiding away the explicit search for cells:

  1. What happens if you move the mesh? Should these points be re-computed or not? When does one decide to recompute bounding boxes and collisions if it is all inside an eval function?

    • If the mesh doesn’t move, should we call this re-computation every time? This would slow down the code. A way of getting around this would be to keep a cache of all points ever searched.
    • If the mesh moves at every time step, the suggested cache approach above will eat up memory, and cache lookups will get slower and slower.
  2. Parallelism:

    • What should happen if one sends in the same point on all processes?
    • What should happen if only one process gets a point (and that point might be on the process or not; should it be communicated to the other processes)?
  3. What should happen if you evaluate at a point where the function does not have a unique value, for instance a DG function evaluated at the midpoint of a facet?

Note that these are my personal opinions and the problems I can envision when adding an abstraction layer on top of eval. This might change in the future, and if anyone is able to suggest a reasonable abstraction layer through a pull request to DOLFINx, it will of course be considered.
