I am trying to assign values to a FEniCS function when running with mpirun. Here is the minimal code:
import numpy as np
from fenics import *
mesh = UnitSquareMesh(3, 3)
V = FunctionSpace(mesh, 'Lagrange', 1)
u = Function(V)
print("u before assignment", u.vector()[:])
x = np.arange(0, 16)
print("to be assigned", x)
u.vector()[:] = x[:]
print("u after assignment", u.vector()[:])
With mpirun -n 2 python3 mwe.py this is the error:
*** Error: Unable to set local values of PETSc vector.
*** Reason: Size of values array is not equal to local vector size.
*** Where: This error was encountered inside PETScVector.cpp.
*** Process: 0
When you run in parallel, your mesh is distributed. The same is the case for the degrees of freedom, meaning that no process contains all degrees of freedom.
You would have to find a way of identifying which dofs exist on each process (for instance by tabulating the dof coordinates), and then only assign those that are local to the process.
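For illustration, a minimal sketch (assuming legacy FEniCS/DOLFIN, with names following the MWE above) of how to query what each process owns and assign only the owned slice of a global array. This removes the size error, but it still relies on the parallel global dof numbering, which is revisited further down in the thread:

import numpy as np
from fenics import *

comm = MPI.comm_world
rank = MPI.rank(comm)

mesh = UnitSquareMesh(3, 3)
V = FunctionSpace(mesh, 'Lagrange', 1)
u = Function(V)

# Global dof indices owned by this process (half-open range)
my_first, my_last = V.dofmap().ownership_range()
print(f"rank {rank}: owns global dofs [{my_first}, {my_last})")

# Global data array, one entry per global dof
x = np.arange(0, V.dim(), dtype=np.float64)

# Assign only the slice owned by this process; its length matches the
# local size of the PETSc vector, so set_local no longer fails.
u.vector().set_local(x[my_first:my_last])
u.vector().apply('insert')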
I tried the following, but u with one process looks different than with multiple processes:
import numpy as np
from fenics import *
comm = MPI.comm_world
rank = MPI.rank(comm)
mesh = UnitSquareMesh(3, 3)
V = FunctionSpace(mesh, 'Lagrange', 1)
u = Function(V)
vec = u.vector()
values = vec.get_local()
print("u before assignment", u.vector()[:])
x = np.arange(0, 16)
dofmap = V.dofmap()
my_first, my_last = dofmap.ownership_range() # global
visited = []
for cell in cells(mesh):
    dofs = dofmap.cell_dofs(cell.index())  # local
    for dof in dofs:
        if not dof in visited:
            global_dof = dofmap.local_to_global_index(dof)  # global
            if my_first <= global_dof < my_last:
                visited.append(dof)
                if rank > 0:
                    k = dof + my_first
                else:
                    k = dof
                values[dof] = x[k]
vec.set_local(values)
vec.apply('insert')
print(f"rank = {rank}", u.vector()[:])
u before assignment [0. 0.]
u before assignment [0. 0. 0. 0.]
u before assignment [0. 0. 0. 0. 0.]
u before assignment [0. 0. 0. 0. 0.]
rank = 0 [0. 1.]
rank = 1 [2. 3. 4. 5. 6.]
rank = 3 [11. 12. 13. 14. 15.]
rank = 2 [ 7. 8. 9. 10.]
Are you assuming that the dofs are ordered in the same way with any number of processes?
What I stated before is that when things are distributed, you need to use something that is specific to the dof to locate it, e.g. the dof coordinates, which you can get with V.tabulate_dof_coordinates(). That tells you where in the mesh each dof is located. Using the global index, and assuming it is fixed for any number of processes, is the wrong assumption.
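A sketch of that idea is below. It assumes that tabulate_dof_coordinates() returns one coordinate per dof owned by the process (matching the length of get_local() in legacy DOLFIN; the assert makes this assumption explicit), and the value rule x + 10*y is only a placeholder. Because each dof is identified by its physical location rather than by a global index, the resulting u is the same for any number of processes:

import numpy as np
from fenics import *

mesh = UnitSquareMesh(3, 3)
V = FunctionSpace(mesh, 'Lagrange', 1)
u = Function(V)

# One coordinate per process-local dof; the reshape guards against
# versions that return a flattened array.
dof_coords = V.tabulate_dof_coordinates().reshape((-1, mesh.geometry().dim()))

vec = u.vector()
values = vec.get_local()
assert len(values) == len(dof_coords)  # assumption: coordinates are for owned dofs

# Define each dof's value from its location, not from a global index,
# so the result does not depend on the partitioning.
for i, (px, py) in enumerate(dof_coords):
    values[i] = px + 10.0 * py  # placeholder rule; replace with your own data lookup

vec.set_local(values)
vec.apply('insert')
print("rank", MPI.rank(MPI.comm_world), vec.get_local())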