Hi there,
I’m trying to define a Function by setting its values at the DOFs. That is, I want to define u on the unit cube as

u(t, x, y) = 2 if max(|x - 0.7|, |y - 0.3|) < 0.2, and 0 otherwise.
I used the following code:
```python
from mpi4py import MPI
import numpy as np
from dolfinx import mesh, io
from dolfinx.fem import FunctionSpace, Function
from petsc4py import PETSc

nt, nx, ny = 20, 40, 40
comm = MPI.COMM_WORLD
rank = comm.Get_rank()

msh = mesh.create_box(comm, points=np.array([[0, 0, 0], [1, 1, 1]]),
                      n=[nt, nx, ny], ghost_mode=mesh.GhostMode.shared_facet)
W = FunctionSpace(msh, ("Lagrange", 1))
dof_coord = W.tabulate_dof_coordinates()

u = Function(W)
# Set u = 2.0 at every DOF whose (second, third) coordinates lie inside the target region
for k in range(u.vector.array[:].size):
    if max(abs(dof_coord[k][1] - 0.7), abs(dof_coord[k][2] - 0.3)) < 0.2:
        u.vector.array[k] = 2.0
u.vector.ghostUpdate(addv=PETSc.InsertMode.INSERT, mode=PETSc.ScatterMode.FORWARD)
u.x.scatter_forward()

with io.XDMFFile(msh.comm, "u.xdmf", "w") as xdmf:
    xdmf.write_mesh(msh)
    xdmf.write_function(u)
```
But I get different results when using MPI. If I run `mpirun -n 1 python3 test.py`, it works perfectly. Below are images of the slices u(0.1, x, y) and u(0.5, x, y), which look as expected:
[slice 1]
[slice 2]
But if I run `mpirun -n 4 python3 test.py`, the same slices look like this:
[slice 1]
[slice 2]
I guess the problem is related to ghost points, but I don’t know how to correct it. I can see that the lengths of `dof_coord` and `u.vector.array[:]` differ when running in parallel, while they are equal with `-n 1`. So, if I want to define a Function as above by specifying values at the DOFs, how should I do it when running in parallel?
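
For reference, I suspect the parallel-safe way might be to let DOLFINx evaluate the expression itself via `Function.interpolate` with a callable, instead of looping over DOF indices by hand. Below is a minimal sketch of what I have in mind (assuming the callable receives the point coordinates as an array `x` of shape `(3, num_points)`); I’m not sure whether this is the recommended approach, or whether my DOF loop can instead be fixed by restricting it to locally owned entries:

```python
import numpy as np
from mpi4py import MPI
from dolfinx import mesh, io
from dolfinx.fem import FunctionSpace, Function

msh = mesh.create_box(MPI.COMM_WORLD, points=np.array([[0, 0, 0], [1, 1, 1]]),
                      n=[20, 40, 40], ghost_mode=mesh.GhostMode.shared_facet)
W = FunctionSpace(msh, ("Lagrange", 1))
u = Function(W)

def indicator(x):
    # x has shape (3, num_points); x[1], x[2] are the second and third coordinates
    inside = np.maximum(np.abs(x[1] - 0.7), np.abs(x[2] - 0.3)) < 0.2
    return np.where(inside, 2.0, 0.0)

u.interpolate(indicator)   # evaluated on each process for its local DOFs
u.x.scatter_forward()      # update ghost values

with io.XDMFFile(msh.comm, "u_interp.xdmf", "w") as xdmf:
    xdmf.write_mesh(msh)
    xdmf.write_function(u)
```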