Extracting nodal values for post processing

Hello everybody, I need to extract a particular nodal value for plotting as part of post-processing. Unfortunately, I am not able to find a suitable function for it, so could anyone please tell me how to extract a particular nodal value in dolfinx? In my case I have a square, and I need to extract the nodal displacement values at the centre node of the top surface.

Can anybody please suggest a tip in this regard? Thanks in advance.

Given a dolfinx.fem.FunctionSpace called V, you can extract the coordinates associated with every DOF with V.tabulate_dof_coordinates(). They are returned in the same order as the DOFs, so use for instance numpy to find the coordinate closest to the one you want, and then use its index to extract the DOF value from the solution vector.

This assumes that your function space is a Lagrange one (but otherwise you wouldn’t be asking about nodal values), and that you are running in serial (it could be extended to parallel too; search the forum with tabulate_dof_coordinates as keyword for more complete examples).
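A minimal sketch of that approach (the names V, uh and point are placeholders for your function space, solution Function and coordinate of interest, and this assumes a scalar Lagrange space and a serial run):

import numpy as np

# Hypothetical names: V is a scalar Lagrange FunctionSpace, uh the solution Function.
# tabulate_dof_coordinates returns one (padded 3D) coordinate per DOF,
# in the same order as the entries of uh.x.array.
point = np.array([0.5, 1.0, 0.0])
dof_coords = V.tabulate_dof_coordinates()
closest_dof = np.argmin(np.linalg.norm(dof_coords - point, axis=1))
print(dof_coords[closest_dof], uh.x.array[closest_dof])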


You could also have a look at: Implementation — FEniCSx tutorial


Thanks for all your responses. Yes sir, my problem does use a Lagrange space, but it runs in parallel, and it is also a time-dependent problem where I need to extract the nodal value at a particular point at every timestep so that I can compare them at the end. Since it runs in parallel, I am thinking that I need to gather values from the processes. Am I going about this the right way, or is it possible simply with the tabulate_dof_coordinates idea you mentioned?

Thank you very much for the support.

If you want to extract data at a single point that does not match with a degree of freedom, I would go for what I linked to above.
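For completeness, the point-evaluation approach from that tutorial section looks roughly like the sketch below (function names as in recent DOLFINx releases, so they may differ slightly in older versions; msh, uh and point are placeholders):

import numpy as np
from dolfinx import geometry

# Hypothetical names: msh is the mesh, uh the Function to evaluate,
# point the location of interest as a (1, 3) array.
point = np.array([[0.5, 1.0, 0.0]])
tree = geometry.bb_tree(msh, msh.topology.dim)
cell_candidates = geometry.compute_collisions_points(tree, point)
colliding_cells = geometry.compute_colliding_cells(msh, cell_candidates, point)
cells = colliding_cells.links(0)
# In parallel, only the ranks whose local cells contain the point will find a cell
if len(cells) > 0:
    print(uh.eval(point, cells[:1]))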

If the point in question aligns with a degree of freedom, I would use locate_dofs_geometrical to find the index of that degree of freedom and access it from u.x.array.

Be sure to unroll the found dof with the block size if you are using a vector function space, so that you get all components of the vector at the dof in question.
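Something along these lines (a sketch, assuming a 2D vector Lagrange space V on a unit square, a displacement Function uh, and the centre node of the top surface as the point of interest):

import numpy as np
from dolfinx import fem

# Hypothetical names: V is a vector Lagrange space, uh the displacement Function
def at_point(x):
    # Centre node of the top surface of a unit square
    return np.isclose(x[0], 0.5) & np.isclose(x[1], 1.0)

dofs = fem.locate_dofs_geometrical(V, at_point)
bs = V.dofmap.bs  # block size, e.g. 2 for a 2D vector space
# In parallel, dofs is empty on ranks that do not have the point locally
for dof in dofs:
    print(uh.x.array[dof * bs:(dof + 1) * bs])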

Thanks sir. I just tried running the tutorial mixed-space code, and it is confusing to see why running it in parallel produces different results compared to running it normally. As I increase the number of processes, the values keep changing. Could anyone please tell me whether there is a specific reason for this? The example code is

import os
from mpi4py import MPI
from petsc4py import PETSc
import numpy as np
import ufl
from basix.ufl import element, mixed_element
from dolfinx import default_real_type
from dolfinx.fem import Function, functionspace, assemble_scalar
from dolfinx.fem.petsc import NonlinearProblem
from dolfinx.mesh import CellType, create_unit_square
from dolfinx.nls.petsc import NewtonSolver
from ufl import dx, grad, inner

lmbda = 1.0e-02  # surface parameter
dt = 5.0e-06  # time step
theta = 0.5  # time stepping family, e.g. theta=1 -> backward Euler, theta=0.5 -> Crank-Nicholson
msh = create_unit_square(MPI.COMM_WORLD, 4, 4, CellType.triangle)
P1 = element("Lagrange", msh.basix_cell(), 1)
ME = functionspace(msh, mixed_element([P1, P1]))
q, v = ufl.TestFunctions(ME)
u = Function(ME)  # current solution
u0 = Function(ME)  # solution from previous converged step
# Split mixed functions
c, mu = ufl.split(u)
c0, mu0 = ufl.split(u0)

# Zero u
u.x.array[:] = 0.0
# Interpolate initial condition
np.random.seed(30)
u.sub(0).interpolate(lambda x: 0.25  + 0.01 * (0.5 - np.random.rand(x.shape[1])))
u.x.scatter_forward()

# Compute the chemical potential df/dc
c = ufl.variable(c)
f = 100 * c**2 * (1 - c)**2
dfdc = ufl.diff(f, c)

# mu_(n+theta)  
mu_mid = (1.0 - theta) * mu0 + theta * mu

# Weak statement of the equations
F0 = inner(c, q) * dx - inner(c0, q) * dx + dt * inner(grad(mu_mid), grad(q)) * dx
F1 = inner(mu, v) * dx - inner(dfdc, v) * dx - lmbda * inner(grad(c), grad(v)) * dx
F = F0 + F1   

# Create nonlinear problem and Newton solver
problem = NonlinearProblem(F, u)
solver = NewtonSolver(MPI.COMM_WORLD, problem)
solver.convergence_criterion = "incremental"
solver.rtol = np.sqrt(np.finfo(default_real_type).eps) * 1e-2

# We can customize the linear solver used inside the NewtonSolver by
# modifying the PETSc options
ksp = solver.krylov_solver
opts = PETSc.Options()  # type: ignore
option_prefix = ksp.getOptionsPrefix()
opts[f"{option_prefix}ksp_type"] = "preonly"
opts[f"{option_prefix}pc_type"] = "lu"
ksp.setFromOptions()

# Step in time
t = 0.0
timesteps = [0]
#  Reduce run time if on test (CI) server
if "CI" in os.environ.keys() or "GITHUB_ACTIONS" in os.environ.keys():
    T = 3 * dt
else:
    T =  1 * dt
# Get the sub-space for c and the corresponding dofs in the mixed space
# vector
V0, dofs = ME.sub(0).collapse()
c = u.sub(0)
u0.x.array[:] = u.x.array

while (t < T):
    t += dt
    r = solver.solve(u)
    print(f"Step {int(t/dt)}: num iterations: {r[0]}")
    u0.x.array[:] = u.x.array
print(u.x.array)

When I run it normally (in serial), it produces values like

[ 0.25268046  0.25113871  0.25368124 18.6876801  18.70976266 18.66311557
  0.24805195 18.78723264  0.25262463 18.6849591   0.25215371 18.69868973
  0.25259528 18.69255675  0.25236828 18.69802514  0.24901909 18.77010902
  0.24577314 18.84374189  0.24803946 18.79348743  0.24675552 18.82031573
  0.24922422 18.76874523  0.25341396 18.67165108  0.25237948 18.69379297
  0.24902894 18.77342398  0.2514791  18.72305914  0.24916687 18.77011863
  0.25445935 18.63935928  0.24505966 18.86024589  0.24862446 18.78465413
  0.25159698 18.71086092  0.24561697 18.84971027  0.24966788 18.75577913
  0.25484011 18.6462961 ]

But running with 4 processes, it produces different numbers. Is there a reason for the difference? I even fixed the random seed for the initial concentration.

[0.25119405 0.25119405 0.2495566  0.2495566  0.2495566  0.2495566 ]
[0.24914252 0.24914252 0.24914252 0.2495566  0.2495566  0.2495566
 0.25153556 0.25153556]
[0.25153556 0.25153556 0.25153556 0.25153556 0.2495566  0.24914252
 0.24914252 0.24914252]
[0.2495566  0.2495566  0.2495566  0.2495566  0.2495566  0.25119405
 0.25119405 0.24914252 0.24914252 0.24914252]

Thanks in advance for all your advice.

I’ll start by pointing out that the randomness is not consistent over multiple processors, as DOLFINx uses MPI-based parallelism, where the mesh and degrees of freedom are partitioned across multiple processors.

You can verify this claim by calling

np.random.seed(30)
def f(x):
    # x.shape[1] is the number of local interpolation points on this rank,
    # so each rank draws different random values even with a fixed seed
    print(x.shape)
    return 0.25 + 0.01 * (0.5 - np.random.rand(x.shape[1]))
u.sub(0).interpolate(f)

Secondly, you need to print the degrees of freedom in the context of their function space, i.e. alongside the coordinates of the corresponding DOFs, since the ordering of u.x.array differs between serial and parallel runs.

I would make some slight adaptations to make it easier to see the connection between each node and each sub-space:

import os
from mpi4py import MPI
from petsc4py import PETSc
import numpy as np
import ufl
from basix.ufl import element, mixed_element
from dolfinx import default_real_type
from dolfinx.fem import Function, functionspace, assemble_scalar
from dolfinx.fem.petsc import NonlinearProblem
from dolfinx.mesh import CellType, create_unit_square
from dolfinx.nls.petsc import NewtonSolver
from ufl import dx, grad, inner

lmbda = 1.0e-02  # surface parameter
dt = 5.0e-06  # time step
theta = 0.5  # time stepping family, e.g. theta=1 -> backward Euler, theta=0.5 -> Crank-Nicholson
msh = create_unit_square(MPI.COMM_WORLD, 4, 4, CellType.triangle)
P1 = element("Lagrange", msh.basix_cell(), 1)
ME = functionspace(msh, mixed_element([P1, P1]))
q, v = ufl.TestFunctions(ME)
u = Function(ME)  # current solution
u0 = Function(ME)  # solution from previous converged step
# Split mixed functions
c, mu = ufl.split(u)
c0, mu0 = ufl.split(u0)

# Zero u
u.x.array[:] = 0.0
# Interpolate initial condition
u.sub(0).interpolate(lambda x: np.full(x.shape[1], 0.25 + 0.01 * 0.5))
u.x.scatter_forward()

# Compute the chemical potential df/dc
c = ufl.variable(c)
f = 100 * c**2 * (1 - c)**2
dfdc = ufl.diff(f, c)

# mu_(n+theta)
mu_mid = (1.0 - theta) * mu0 + theta * mu

# Weak statement of the equations
F0 = inner(c, q) * dx - inner(c0, q) * dx + \
    dt * inner(grad(mu_mid), grad(q)) * dx
F1 = inner(mu, v) * dx - inner(dfdc, v) * dx - \
    lmbda * inner(grad(c), grad(v)) * dx
F = F0 + F1

# Create nonlinear problem and Newton solver
problem = NonlinearProblem(F, u)
solver = NewtonSolver(MPI.COMM_WORLD, problem)
solver.convergence_criterion = "incremental"
solver.rtol = np.sqrt(np.finfo(default_real_type).eps) * 1e-2

# We can customize the linear solver used inside the NewtonSolver by
# modifying the PETSc options
ksp = solver.krylov_solver
opts = PETSc.Options()  # type: ignore
option_prefix = ksp.getOptionsPrefix()
opts[f"{option_prefix}ksp_type"] = "preonly"
opts[f"{option_prefix}pc_type"] = "lu"
ksp.setFromOptions()

# Step in time
t = 0.0
timesteps = [0]
#  Reduce run time if on test (CI) server
if "CI" in os.environ.keys() or "GITHUB_ACTIONS" in os.environ.keys():
    T = 3 * dt
else:
    T = 1 * dt
# Get the sub-space for c and the corresponding dofs in the mixed space
# vector
V0, dofs = ME.sub(0).collapse()
c = u.sub(0)
u0.x.array[:] = u.x.array

while (t < T):
    t += dt
    r = solver.solve(u)
    print(f"Step {int(t/dt)}: num iterations: {r[0]}")
    u0.x.array[:] = u.x.array
u0, u1 = u.split()
u0 = u0.collapse()
u1 = u1.collapse()
print(u0.function_space.tabulate_dof_coordinates()[:, :msh.geometry.dim], u0.x.array, "\n",
      u1.function_space.tabulate_dof_coordinates()[:, :msh.geometry.dim], u1.x.array)
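If you then want the parallel output to be directly comparable with the serial one, one option is to gather the owned values and their DOF coordinates on a single rank and sort them, for instance (a sketch reusing msh, np and the collapsed u0 from the script above; the same can be done for u1):

# Keep only owned DOFs (they come before the ghosts in the local arrays)
im = u0.function_space.dofmap.index_map
num_owned = im.size_local  # scalar P1 sub-space, block size 1
coords = u0.function_space.tabulate_dof_coordinates()[:num_owned, :msh.geometry.dim]
values = u0.x.array[:num_owned]
# Gather on rank 0 and sort by coordinate so runs with any process count match
gathered_coords = msh.comm.gather(coords, root=0)
gathered_values = msh.comm.gather(values, root=0)
if msh.comm.rank == 0:
    all_coords = np.vstack(gathered_coords)
    all_values = np.concatenate(gathered_values)
    order = np.lexsort(all_coords.T)
    print(all_coords[order], all_values[order])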