Using set_local in Parallel

Dear all,

I’m trying to set the values of a function depending on the location of each point. The values come from a complicated analytical function. The code runs without a problem in serial, but I’m now trying to get it to run in parallel. When run in parallel with

mpirun -np 4 python minimum_working_example.py 

the assigned values are wrong at some locations, and those locations appear to be the partition boundaries. If I solve a similar system without assigning any values, there is no problem, so I deduce that the assignment must be the cause. There are several similar topics, but they all seem to deal with assigning values over the whole domain rather than at the partition boundaries.

I provide a minimum working example below, along with figures obtained in ParaView that illustrate the comparison.

from fenics import *
import numpy as np

num_steps = 20
time = 0.008
dt = time/num_steps #s

def dummy_function(my_array):
    #Gaussian profile standing in for the complicated analytical function
    return np.exp(-np.einsum('ij,ij->i', my_array, my_array))

#read the mesh
mesh = UnitCubeMesh(MPI.comm_world, 10,10,10)
MeshTransformation.rescale(mesh, 1e-3, Point(0,0,0)) #in m

#define function space for v
V = FunctionSpace(mesh, "CG", 1)

#define the function and time
T = interpolate(Constant(0.0), V)
t = 0.00001
        
xdmf_file = XDMFFile(MPI.comm_world, "analytic/analytic.xdmf")
xdmf_file.parameters["flush_output"] = True
xdmf_file.parameters["functions_share_mesh"] = True
xdmf_file.parameters["rewrite_function_mesh"] = False

velocity = np.array([0.5,0,0])

#calculate analytical temperature at each time step
for n in range(num_steps):
       
    #evaluate the analytical profile at the shifted, advected dof coordinates
    points = 1000*(mesh.coordinates()[dof_to_vertex_map(V)]
                   - np.array([0.0005, 0.0005, 0.001])
                   - velocity*t)
    T.vector().set_local(dummy_function(points))

    T.rename("Temperature", "Analytical Temperature")
    xdmf_file.write(T, t)
    
    #increment time and recalculate time dependent functions
    t += dt

    print('{}% Complete'.format(round(100*n/num_steps,2)))



Is there anyone encountering a similar problem? Any help is appreciated. Thanks in advance!

Cheers,

rek

After manually editing the entries of a parallel vector, you need to update ghost values on neighboring MPI tasks. You can do this with the petsc4py API by running

    as_backend_type(T.vector()).vec().ghostUpdate()

immediately after set_local.
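
In your MWE that would look roughly like the sketch below (a minimal standalone snippet reusing your variable names; everything except the added ghostUpdate line is taken from your loop):

    from fenics import *
    import numpy as np

    mesh = UnitCubeMesh(MPI.comm_world, 10, 10, 10)
    MeshTransformation.rescale(mesh, 1e-3, Point(0, 0, 0))
    V = FunctionSpace(mesh, "CG", 1)
    T = interpolate(Constant(0.0), V)

    velocity = np.array([0.5, 0, 0])
    t = 0.00001

    #evaluate the analytical profile at the dof coordinates, as in the MWE
    points = 1000*(mesh.coordinates()[dof_to_vertex_map(V)]
                   - np.array([0.0005, 0.0005, 0.001])
                   - velocity*t)
    T.vector().set_local(np.exp(-np.einsum('ij,ij->i', points, points)))

    #propagate the owned values to the ghost entries on neighboring processes
    as_backend_type(T.vector()).vec().ghostUpdate()

If I remember correctly, the default arguments of ghostUpdate correspond to an insert-mode forward scatter, i.e. the owners' values are copied into the ghost copies on the other processes, which is what you want here.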
