How to obtain the computational error of the global matrix in parallel computing?

Hi, I have a question about parallel computing in dolfinx.

In an interleaved iterative computation, the program needs to compute the least-squares error of the complete (global) matrix, output it, and then decide whether another iteration is needed. However, in parallel the matrix is split across processes and each process only computes the error of its own part. Is there an appropriate way to combine these local errors into a global one?

Thank you!

Could you please show a minimal reproducible example of what you mean?

Do you mean the error $\lVert Ax - b \rVert_{l_2}$?

Yes, it is.

Also, I am sorry that I cannot provide a complete minimal example. I have tried my best to show the structure of the program, especially the final iterative computation part. Thank you.

from mpi4py import MPI
from dolfinx import mesh, fem
from dolfinx.fem.petsc import NonlinearProblem
from dolfinx.nls.petsc import NewtonSolver
from petsc4py import PETSc
import dolfinx as df
import numpy as np
import ufl

def NewtonSolver_Option(solver):        # PETSc KSP 
    solver.convergence_criterion = "residual"
    solver.error_on_nonconvergence = True
    solver.rtol = 1e-5
    solver.atol = 1e-5
    solver.max_it = 500
    solver.relaxation_parameter = 1
    solver.report = True
    ksp = solver.krylov_solver
    opts = PETSc.Options()
    option_prefix = ksp.getOptionsPrefix()
    opts[f"{option_prefix}pc_factor_mat_solver_type"] = "mumps"
    ksp.setFromOptions()
    return solver

domain = mesh.create_unit_square(MPI.COMM_WORLD, 50, 50, mesh.CellType.quadrilateral)

U = fem.functionspace(domain, ('CG', 1, (2, )))     # Displacement
D = fem.functionspace(domain, ('CG', 1))            # Damage field

u = fem.Function(U, name="Displacement")
d = fem.Function(D, name="Damage")

# parameter
E = 2e4
nu = 0.2
Gf = 0.2
ft = 3
b = 0.1

lambda_ = E * nu / (1 + nu)/(1 - 2 * nu)
mu = E / 2 / (1 + nu)

# BC
tdim = domain.topology.dim
fdim = tdim - 1
domain.topology.create_connectivity(fdim, tdim)

load_value = fem.Constant(domain, df.default_scalar_type((0, 0)))
bc_load = fem.dirichletbc(load_value, fem.locate_dofs_geometrical(U, lambda x: np.isclose(x[0], 1)), U)
bc_fix = fem.dirichletbc(df.default_scalar_type((0, 0)), fem.locate_dofs_geometrical(U, lambda x: np.isclose(x[0], 0)), U)
bcs = [bc_load, bc_fix]

# function
def epsilon(u):
    return ufl.sym(ufl.grad(u))

def sigma(u):
    return lambda_ * ufl.nabla_div(u) * ufl.Identity(len(u)) + 2 * mu * epsilon(u)

def omega(d):       # p=2
    # there is a function of omega(d)
    return 1

def omega_deri(d):
    # there is the derivative of omega(d)
    return 0

def Y(u):
    part_1 = (sigma(u)[0, 0] + sigma(u)[1, 1]) / 2 
    part_2 = ufl.sqrt(((sigma(u)[0, 0] - sigma(u)[1, 1]) / 2.) ** 2 + sigma(u)[0, 1] ** 2)
    s1, s2 = part_1 + part_2, part_1 - part_2
    s_max = ufl.max_value(s1, s2)
    sigma_eff = (s_max + abs(s_max)) / 2
    return sigma_eff ** 2 / 2 / E

# create solvers
u_ = ufl.TestFunction(U)
d_ = ufl.TestFunction(D)

displacement = ufl.inner(omega(d) * sigma(u), epsilon(u_)) * ufl.dx
phase_field = (1.273239545 * ((1. - d) * d_ + 0.01 * ufl.dot(ufl.grad(d), ufl.grad(d_))) + omega_deri(d) * Y(u) * d_) * ufl.dx

problem_u = NonlinearProblem(displacement, u, bcs)
solver_u = NewtonSolver(MPI.COMM_WORLD, problem_u)
solver_u = NewtonSolver_Option(solver_u)

problem_d = NonlinearProblem(phase_field, d, [])    # In fact, the SNES solver (type vinewtonrsls) is required for the d problem.
solver_d = NewtonSolver(MPI.COMM_WORLD, problem_d)  # I will ignore the configuration of SNES for this minimal example.
solver_d = NewtonSolver_Option(solver_d)

# set load
load_start = 0
load_step = 0.01

# load and iteration
step = 0; step_max = 50
iter = 0; iter_tol = 1e-6
iter_num_max = 200
d_iter = d.copy()
d_iter_error = d.copy()

while step < step_max:      # Regarding the design of parallel computation for this part, especially the iterative process.
    iter_num = 0
    load_value.value = (load_start + step * load_step, 0)
    d_iter.x.array[:] = d.x.array
    while iter_num <= iter_num_max:
        solver_u.solve(u)
        solver_d.solve(d)
        d_iter_error.x.array[:] = d_iter.x.array - d.x.array
        d_error = np.linalg.norm(d_iter_error.x.array, ord=np.inf)      # I hope this error is calculated as the global error
        print(f"Damage Error: {d_error}")
        if d_error <= iter_tol:         # The global error determines whether each process continues the iteration.
            break
        d_iter.x.array[:] = d.x.array
        iter_num += 1
    print("\n------------- Next step -------------\n")
    step += 1
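
What I essentially want is for d_error above to be the global error across all processes. By hand I could presumably do something like the following allreduce of the per-process maxima, but I am not sure whether this is the intended way in dolfinx (this is just my guess, not tested):

comm = domain.comm

# each process computes the max over its local entries ...
local_error = np.linalg.norm(d_iter_error.x.array, ord=np.inf)
# ... and the maxima are combined into one global value, the same on every rank
global_error = comm.allreduce(local_error, op=MPI.MAX)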

Is this not what you are trying to do here?
There are no matrices involved in the d_iter_error computation.
There is, however, a distributed vector.
Within dolfinx there are operations for collective norms, see

which would be
dolfinx.la.norm(d_iter_error.x)
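
For example, to mimic the infinity norm you currently compute with NumPy, something along these lines should work (the Norm enum is available in recent dolfinx versions; adjust to your version if needed):

import dolfinx.la

# collective norms over the distributed vector; all ranks get the same value
d_error_l2 = dolfinx.la.norm(d_iter_error.x)                           # l2 norm (default)
d_error_linf = dolfinx.la.norm(d_iter_error.x, dolfinx.la.Norm.linf)   # max (infinity) norm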

As a side note, since you are solving a non-linear problem, can't you just use the l2 norm of the assembled residual?
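
A rough sketch of what I mean, assuming the residual form phase_field from your snippet (PETSc's Vec.norm is already a collective operation, so every rank sees the same global value):

from petsc4py import PETSc
from dolfinx import fem
from dolfinx.fem.petsc import assemble_vector

# assemble the residual of the damage problem at the current iterate
residual_form = fem.form(phase_field)
b = assemble_vector(residual_form)
b.ghostUpdate(addv=PETSc.InsertMode.ADD, mode=PETSc.ScatterMode.REVERSE)

# global l2 norm of the residual, identical on all ranks
residual_norm = b.norm(PETSc.NormType.NORM_2)
b.destroy()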

Thank you. I will try dolfinx.la.norm(d_iter_error.x) in the program and take your advice into consideration.

Btw, will dolfinx's API for the SNES solver be released in the next version (dolfinx v0.10)? :slight_smile:

Hopefully, yes. There is a PR at

Thank you very much!