Different results from NewtonSolver in parallel with MPI.COMM_WORLD

Dear all,

I’m very new to FEniCSx and MPI. I’m trying to run my code in parallel, since I will be working with large meshes.
I reduced the code to the small example below in order to compare the results for c when running on one core versus on multiple cores.

Every time I run the code in parallel I get a (seemingly random) different result; I have seen three different possible solutions.

Maybe I’m missing some essential part… I also tried updating the ghosts and using set_bc, with the same result.

I’d appreciate some help and advice! Thank you!

import numpy as np
from mpi4py import MPI
from petsc4py import PETSc
import ufl

from dolfinx import mesh, fem, default_scalar_type, nls
import dolfinx.fem.petsc  # needed for fem.petsc.NonlinearProblem
import dolfinx.nls.petsc  # needed for nls.petsc.NewtonSolver
from dolfinx.fem import functionspace
from mmapy import gcmmasub, mmasub, subsolv, asymp, concheck, raaupdate  # MMA/GCMMA update (used in the full optimization code)


# Create mesh: unit cube with corners [0,0,0] and [1,1,1]
domain = mesh.create_unit_cube(MPI.COMM_WORLD, 20, 20, 20, mesh.CellType.hexahedron,
                               ghost_mode=mesh.GhostMode.shared_facet)
tdim = domain.topology.dim
fdim = tdim - 1
domain.topology.create_connectivity(fdim, tdim)

#define function space
V = functionspace(domain, ("Lagrange", 1))

# Design variable: porosity eps; test function phi
eps = fem.Function(V)  # design variable
phi = ufl.TestFunction(V)

eps.x.array[:] = 0.5  # initial porosity

# Dirichlet BC: c = 1 on the x = 0 face
def on_boundary(x):
    return np.isclose(x[0], 0)

dofs_L = fem.locate_dofs_geometrical(V, on_boundary)
bc_L = fem.dirichletbc(default_scalar_type(1), dofs_L, V)

# Parameters
D0 = 1.0
k0 = -1.0
eta2 = 2.0
gamma = 2.0
Rfl = 0.0072

for loop in range(0, 11):
    c = fem.Function(V)
    f = fem.Constant(domain, default_scalar_type(0))

    # Residual of the diffusion-reaction equation for the concentration c
    R = (ufl.inner(D0 * eps**eta2 * ufl.grad(c), ufl.grad(phi)) * ufl.dx
         - ufl.inner(k0 * (1 - eps)**gamma * c, phi) * ufl.dx
         - ufl.inner(f, phi) * ufl.ds)

    problem_u = fem.petsc.NonlinearProblem(R, c, bcs=[bc_L])
    solver = nls.petsc.NewtonSolver(domain.comm, problem_u)

    # solve() returns the number of Newton iterations and a convergence flag
    n_its, converged = solver.solve(c)
    c.x.scatter_forward()

    # NOTE: in parallel every rank only holds its local part of the dof array,
    # so this writes a different (partial) array on every rank
    np.save('c', c.x.array)

If you are referring to getting different solutions in parallel, this is not surprising: when the code is executed with MPI the mesh is distributed, which in turn means that the solution is distributed across the requested processes. No single process stores the full solution.
Some time ago I posted how to gather solutions: Gather solutions in parallel in FEniCSX - #2 by dokken
and an explanation of parallel distribution at
Parallel Computations with Dolfinx using MPI — MPI tutorial
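
The core of the gathering approach from that post is roughly the following (an untested sketch reusing the names domain, V and c from your snippet; note that the gathered array follows the parallel global dof numbering, which differs from the serial ordering, so you may also want to gather the dof coordinates):

# Gather the owned (non-ghost) dof values and their global ranges on rank 0
comm = domain.comm
imap = V.dofmap.index_map
bs = V.dofmap.index_map_bs
local_range = np.asarray(imap.local_range, dtype=np.int64) * bs

ranges = comm.gather(local_range, root=0)
data = comm.gather(c.x.array[: imap.size_local * bs], root=0)

if comm.rank == 0:
    c_global = np.zeros(imap.size_global * bs)
    for r, d in zip(ranges, data):
        c_global[r[0]: r[1]] = d
    np.save('c', c_global)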

Dear dokken,

Many thanks!
I implemented the gathering and it seems to work well!
I’m still reading through your posts in order to understand MPI better, but I have another question. I have to gather the solution on rank 0. Now, if I want to use this gathered solution as an input to another PDE in the same iteration loop, how could I implement that?
Or can I let the whole code run in parallel and gather the final results?
Thank you!

I would never advise gathering results in the way you suggested above (writing out the degrees of freedom as a raw array).
There are specific output formats (.pvd, .xdmf, .bp) that handle the parallel writing of data related to meshes/functions.

What do you need the dof values for? On their own they need context (their degree of freedom location, the corresponding cell, etc.) to be of any use.
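
For example, with XDMF the parallel write is handled for you (a minimal sketch, reusing domain and c from above; the file name is arbitrary):

from dolfinx import io

# The writer takes care of the parallel layout, so the file is the same
# regardless of how many MPI ranks were used
with io.XDMFFile(domain.comm, "c.xdmf", "w") as xdmf:
    xdmf.write_mesh(domain)
    xdmf.write_function(c)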


Thank you again!
What I would actually like to do is topology optimization, in particular:

  1. solve the equation for c (above)
  2. insert the values of c into a sensitivity equation and solve it
  3. use MMA / GCMMA to update the design variable

And ideally do all of that on multiple cores using MPI.
When I use COMM_SELF it all works fine (because the whole mesh is processed on every core?). But when I run it with COMM_WORLD, I get odd solutions, which I think comes from inserting local/partial results instead of the gathered ones into the sensitivity and update equations.
Could you elaborate on the specific output formats, please?

Appreciate your help very much! Thank you!

You should be able to run this in parallel, see for instance

or

http://jsdokken.com/FEniCS-workshop/src/applications/optimal_control.html#the-gradient-of-the-functional

Thank you!

So, if I understand correctly, I have to take care of using petsc.assemble and updating the ghosts in the right places. Since the mesh is distributed for all equations used in the code, the calculations should be correct (just distributed). Then, at the end, I have to gather the solutions that are needed as input for the next iteration (as in the case of c above).

Yes. The forward and adjoint computations can be done with a nonlinear or linear solver in DOLFINx. However, when computing the sensitivity, you have to accumulate contributions across processes.
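
For the accumulation step, the pattern looks roughly like this (an untested sketch; the functional J below is only a placeholder built from the names in your snippet, not your actual objective):

import ufl
from petsc4py import PETSc
from dolfinx import fem
import dolfinx.fem.petsc

# Placeholder objective-like functional of eps and c; replace with your own
J = k0 * (1 - eps)**gamma * c * ufl.dx
dJde = fem.form(ufl.derivative(J, eps))

# Assemble the sensitivity vector and accumulate the contributions that
# neighbouring processes computed for shared (ghost) dofs
dJde_vec = fem.petsc.assemble_vector(dJde)
dJde_vec.ghostUpdate(addv=PETSc.InsertMode.ADD_VALUES, mode=PETSc.ScatterMode.REVERSE)
dJde_vec.ghostUpdate(addv=PETSc.InsertMode.INSERT, mode=PETSc.ScatterMode.FORWARD)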