Dofs value assignment problem in Parallel

Hi,

I was doing tests for the dof mapping when I noticed that if I assign a value directly to a Function's vector and run the code in parallel, the value gets assigned to multiple dofs.

Here is a MWE:

from dolfin import *
from mpi4py import MPI as pyMPI

comm = MPI.comm_world
rank = comm.Get_rank()

# Mesh ------------------------------
mesh = UnitSquareMesh(5,5)

# Function spaces -----------------------

V_scalar = FiniteElement("Lagrange", mesh.ufl_cell(), 1)
V = FunctionSpace(mesh, V_scalar)
u = Function(V)
u.vector()[0] = 2
File('u.pvd') << u

which gives me the following output with 3 cores:

and this one with 5 cores:

Is there a way to avoid this issue? I am using the latest FEniCS version.

You are now editing dof 0 on every processor, as the dofs are partitioned over the different processors. What would you like to achieve by setting the value of dof 0? Is it for a specific spatial coordinate in the mesh, i.e. dof(0) = the dof at (0,0) in serial?
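
For example, here is a quick sketch (my own illustration, assuming legacy DOLFIN, run with mpirun) that prints which block of the global dof numbering each process owns; the local index 0 then refers to a different global dof on every rank:

from dolfin import *

mesh = UnitSquareMesh(5, 5)
V = FunctionSpace(mesh, "Lagrange", 1)
u = Function(V)

# Each process owns a contiguous block of the global numbering
rank = MPI.comm_world.Get_rank()
print(rank, "vector local range:", u.vector().local_range())
print(rank, "dofmap ownership range:", V.dofmap().ownership_range())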


Hi, thank you for the reply.

I am setting specific values on the dofs in order to copy the solution of a displacement problem, obtained on a coarse global mesh, onto the boundary of a local submesh (via parent-child and vertex-to-dof maps, since this is faster than interpolation).

The full code runs fine in serial, but the idea was to make it work in parallel (using MeshView to create the submesh in parallel).

Please supply a minimal example of how your function works in serial.
As I said above, the dof numbering in serial and parallel differs, since every processor has its own local dof numbering [0, N_i), i = 1, …, num_procs.
You can of course get the global dof numbering of each local process (using FunctionSpace -> DofMap -> IndexMap -> local_to_global). However, I would strongly suggest that you use local insertions; I tend to use

 u.vector().vec().setValueLocal(index, value)
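
As an illustration, a minimal sketch of such a local insertion (assuming legacy DOLFIN; the rank check and the index/value are only for demonstration):

from dolfin import *

mesh = UnitSquareMesh(5, 5)
V = FunctionSpace(mesh, "Lagrange", 1)
u = Function(V)

# Write to the process-local index 0 on rank 0 only,
# then finalize the insertion (collective call)
if MPI.comm_world.Get_rank() == 0:
    u.vector().vec().setValueLocal(0, 2.0)
u.vector().apply("insert")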

Here is a MWE of the serial code copying the values from the parent mesh to mesh2 (its submesh) for a scalar field:

from dolfin import *
from mpi4py import MPI as pyMPI
comm = MPI.comm_world
rank = comm.Get_rank()

mesh = UnitSquareMesh(5,5)

# Mark the cells with midpoint x < 0.5 to define the submesh region
mf = MeshFunction('size_t', mesh, 2, 0)
for n, c in enumerate(cells(mesh)):
    if c.midpoint().x() < 0.5:
        mf[n] = 1

mesh2 = SubMesh(mesh, mf, 1)
# Map from submesh vertex indices to parent-mesh vertex indices
vmap = mesh2.data().array('parent_vertex_indices', 0)

V_scalar = FiniteElement("Lagrange", mesh.ufl_cell(), 1)

V = FunctionSpace(mesh, V_scalar)
u = Function(V)
v2dof_u = vertex_to_dof_map(V)

V2 = FunctionSpace(mesh2, V_scalar)
u2 = Function(V2)
v2dof_u2 = vertex_to_dof_map(V2)

u.vector()[0] = 2
# Copy the parent-mesh values onto the submesh, vertex by vertex, via the maps
u2.vector()[v2dof_u2] = u.vector()[v2dof_u[vmap]]

File('u.pvd') << u
File('u2.pvd') << u2

I will try the local insertion as you suggested in my code to try to make it work in parallel.


The only problem here is that u.vector()[0] is not uniquely defined in parallel. Why do you want to set the zeroth dof to 2 in serial?

Hi @dokken, I have a question regarding your statement, “the dof numbering in serial and parallel differs”.

Is there a way to get a mapping between the serial dof numbering and the parallel dof numbering? For example, suppose a specific dof is number 20 in a serial simulation but number 51 in a parallel simulation: is there a way to map the parallel numbering back to the serial numbering (i.e. 51 -> 20)? Is this related to your comment:

“”"
You can of course get the global dof numbering of each local process (using FunctionSpace->DofMap->IndexMap->local_to_global), however, I would strongly suggest that you use local insertions, I tend to use

 u.vector().vec().setValueLocal(index, value)

“”"