MPI communication with discontinuous Galerkin method for diffusion

Dear all,

I have a diffusion problem that runs well with one process. For parallel execution, however, the obtained solution is clearly wrong, as can be seen in this picture:

[image: parallel solution with the partition boundaries clearly visible in the concentration field]

Obviously, the domain is partitioned as it should be, but the process communication is wrong. I am using the dolfinx/dolfinx:stable Docker image on a workstation running Ubuntu. To run my script in parallel, I use the command:

mpirun -n 8 python3 diffusion.py

This is the code that produces the error:

from mpi4py import MPI                                                          
from petsc4py import PETSc                                                      
from dolfinx import io                                                          
from dolfinx.fem import FunctionSpace, Function, form                           
from dolfinx.fem.petsc import (create_vector, create_matrix, assemble_vector,   
                                assemble_matrix, apply_lifting)                 
from dolfinx.io import gmshio                                                   
from ufl import (TestFunction, TrialFunction, jump, avg, CellDiameter,          
                    FacetNormal, dx, dS, dot, lhs, rhs)                         
import gmsh                                                                     
                                                                                
                                                                                
gmsh.initialize()                                                               
                                                                                
gdim = 2                                                                        
gmsh_model_rank = 0                                                             
mesh_comm = MPI.COMM_WORLD                                                      
                                                                                
if mesh_comm.rank == gmsh_model_rank:
    disk = gmsh.model.occ.addDisk(0.5, 0.5, 0, 0.5, 0.5)  # returns the entity tag
    gmsh.model.occ.synchronize()
    gmsh.option.setNumber("Mesh.CharacteristicLengthMin", 0.01)
    gmsh.option.setNumber("Mesh.CharacteristicLengthMax", 0.01)
    gmsh.model.addPhysicalGroup(gdim, [disk], 1)
    gmsh.model.setPhysicalName(gdim, 1, "disk")
    gmsh.model.mesh.generate(gdim)
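    # note: the model is built on rank 0 only; model_to_mesh below partitions
    # the mesh and distributes it across all ranks of the communicator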
                                                                                
domain, cell_markers, facet_markers = gmshio.model_to_mesh(gmsh.model, MPI.COMM_WORLD, gmsh_model_rank, gdim=gdim)
                                                                                
gmsh.finalize()                                                                 
                                                                                
V = FunctionSpace(domain, ("DG", 0))                                            
                                                                                
y = Function(V)                                                                 
h = TrialFunction(V)                                                            
v = TestFunction(V)                                                             
n = FacetNormal(domain)                                                         
d = CellDiameter(domain)                                                        
                                                                                
d_avg = (d('+') + d('-'))/2  # average cell diameter
                                                                                
t  = 0.0  # initial time                                                        
dt = 1e-5 # time increment                                                      
                                                                                
xdmf = io.XDMFFile(domain.comm, "concentration.xdmf", "w")                      
xdmf.write_mesh(domain)                                                         
                                                                                
# initial condition                                                             
y_n = Function(V)                                                               
y_n.interpolate(lambda x: 0.01 + 0.98*(((x[0]-0.15)**2+(x[1]-0.3)**2)**0.5 < 0.12)
                               + 0.98*(((x[0]-0.45)**2+(x[1]-0.55)**2)**0.5 < 0.13)
                               + 0.98*(((x[0]-0.7)**2+(x[1]-0.2)**2)**0.5 < 0.15)
                               + 0.98*(((x[0]-0.2)**2+(x[1]-0.67)**2)**0.5 < 0.148)
                               + 0.98*(((x[0]-0.7)**2+(x[1]-0.6)**2)**0.5 < 0.09)
                               + 0.98*(((x[0]-0.65)**2+(x[1]-0.8)**2)**0.5 < 0.1))
                                                                                
                                                                                
xdmf.write_function(y_n, t)                                                     
                                                                                
# linear variational problem                                                    
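# backward Euler in time; the dS term couples neighbouring cells through their
# shared interior facets (a two-point flux approximation of the diffusion term)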
F = (y_n-h)*v*dx - dt/d_avg*dot(jump(h, n), jump(v, n))*dS
                                                                                
a = form(lhs(F))                                                                
L = form(rhs(F))                                                                
A = create_matrix(a)                                                            
b = create_vector(L)                                                            
                                                                                
solver = PETSc.KSP().create(domain.comm)                                        
solver.setOperators(A)                                                          
solver.setType(PETSc.KSP.Type.BCGS)                                             
solver.getPC().setType(PETSc.PC.Type.JACOBI)                                    
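# note: BiCGStab with a Jacobi preconditioner needs no extra configuration to
# run in parallel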
                                                                                
                                                                                
for i in range(500):                                                            
                                                                                
    # solve linear system                                                       
    A.zeroEntries()                                                             
    assemble_matrix(A, a, bcs=[])                                               
    A.assemble()                                                                
    with b.localForm() as loc:                                                  
        loc.set(0)                                                              
    assemble_vector(b, L)                                                       
    apply_lifting(b, [a], [[]])                                                 
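    # add vector entries assembled into ghost locations back into their owning ranks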
    b.ghostUpdate(addv=PETSc.InsertMode.ADD, mode=PETSc.ScatterMode.REVERSE)
    solver.solve(b, y.vector)                                                   
    y.x.scatter_forward()                                                       
                                                                                
    # update solution                                                           
    y_n.x.array[:] = y.x.array                                                  
                                                                                
    # update time                                                               
    t += dt                                                                     
                                                                                
    # save solution                                                             
    xdmf.write_function(y, t)                                                   
                                                                                
xdmf.close()                                   

That’s an interesting one.

For what it’s worth, I cannot reproduce your problem @wolfla. Same Docker container, same code, same mpirun command.

Looks like a hardware problem.

It is because model_to_mesh uses GhostMode.none by default.

Since your weak form contains interior facet integrals (dS), each process needs access to the cells on the far side of its partition boundary, i.e. GhostMode.shared_facet. You would have to call

domain, cell_markers, facet_markers = gmshio.model_to_mesh(
    gmsh.model, MPI.COMM_WORLD, gmsh_model_rank, gdim=gdim,
    partitioner=dolfinx.mesh.create_cell_partitioner(dolfinx.mesh.GhostMode.shared_facet))
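
As a quick sanity check (a minimal sketch, assuming the same gmsh model as in your script), you can print the number of ghost cells on each rank after loading the mesh. With GhostMode.none every rank reports 0 ghosts, while shared_facet gives nonzero counts on ranks that touch a partition boundary:

import dolfinx
import gmsh
from mpi4py import MPI
from dolfinx.io import gmshio

# ... build the gmsh model on rank 0 as in the script above ...

domain, _, _ = gmshio.model_to_mesh(
    gmsh.model, MPI.COMM_WORLD, 0, gdim=2,
    partitioner=dolfinx.mesh.create_cell_partitioner(
        dolfinx.mesh.GhostMode.shared_facet))

tdim = domain.topology.dim
cell_map = domain.topology.index_map(tdim)  # owned/ghost cell layout
print(f"rank {MPI.COMM_WORLD.rank}: {cell_map.size_local} owned cells, "
      f"{cell_map.num_ghosts} ghost cells")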

Thanks for the quick responses, @dokken and @hawkspar! It turned out that I had accidentally given a locally built dolfinx image (created with dolfinx/dolfinx-onbuild) the same name as the official dolfinx/dolfinx image. There was something wrong with this locally built image. The above code now works for me as well, thanks.
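
In case it helps anyone hitting the same mix-up: pulling the official image again, e.g.

docker pull dolfinx/dolfinx:stable

overwrites a locally built image that was tagged with the same name.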