Hello, I am having trouble implementing periodic boundary conditions (PBCs) in parallel. My physical problem is a convection problem: the Stokes equations coupled with nonlinear scalar equations. I have worked with parallelization in closed 2D domains, and it works very well in circular or square domains. I have also implemented the same physical problem in a 2D rectangular geometry (in serial) with periodic boundary conditions on the sides of the rectangle, and it also works very well. But when I run the 2D periodic case in a parallelized code, it does not work (on more than one core). It is as if the multi-point constraint (MPC) on the slave nodes is not applied in the solution, both for Stokes and for the scalars: when I visualize the solution, the scalar variables become 0 at the PBC boundaries, and the fluid velocity is wrong there as well. If I run the same (parallelized) code on a single core, or the serial version, the physics and the PBCs work fine. Could you tell me what the problem might be? Thanks
Without an MWE (minimal working example), it is difficult to help you. E.g., I don’t know which FEniCS version you’re using.
Are you using FEniCSx with the dolfinx_mpc package? If so, I think you’re running into this issue: Rewrite create sparsity pattern to be correct for non-square systems by jorgensd · Pull Request #216 · jorgensd/dolfinx_mpc · GitHub, which was fixed and merged into dolfinx_mpc two weeks ago. Using the latest version should fix your issue.
Dear Dr. Stein, thanks for your reply. I’m using FEniCSx. I already solved the problem: it was in how I was saving the data to the VTK files. As you can see in the following FEniCSx snippet, inside a time loop:
```python
uh_sol = problemr.solve()
uh_sol.x.scatter_forward()
mpc_e.backsubstitution(uh_sol.x.petsc_vec)
uh_sol.x.scatter_forward()
uh_sol.x.petsc_vec.copy(uh.x.petsc_vec)
uh.x.scatter_forward()
```
My working variable is uh, but I need to save uh_sol in order to plot correctly.
But now I have another problem: loading the checkpoint data in parallel. I am reading the data saved in a checkpoint (*.h5), but when I call the function below, it fails to read the data into the variable u.
```python
import json
import os
from pathlib import Path

from petsc4py import PETSc


def load_checkpoint(path: Path, u, comm):
    path_h5 = str(path)
    meta_path = path_h5.replace(".h5", ".json")
    meta = {}
    # Read the JSON metadata on rank 0 only, then broadcast it to all ranks
    if comm.rank == 0 and os.path.exists(meta_path):
        with open(meta_path, "r") as f:
            meta = json.load(f)
    meta = comm.bcast(meta, root=0)
    comm.barrier()

    # ---- u ----
    # Load the PETSc vector named "u" from the HDF5 file
    viewer = PETSc.Viewer().createHDF5(path_h5, mode=PETSc.Viewer.Mode.READ, comm=comm)
    v = u.x.petsc_vec
    v.setName("u")
    v.load(viewer)
    viewer.destroy()
    u.x.scatter_forward()
    return meta
```
When I inspect the *.h5 file, I find that it contains the proper data. So I don’t know how to read this kind of data in FEniCSx with a parallel scheme. Can you suggest a solution, or point out what I am doing wrong? Thanks
Checkpointing is best done through adios4dolfinx, especially if you are writing out on one set of processors and want to read it in on another set of processors.
See:
Note that I’m currently making a rather dramatic rewrite to increase the capabilities of adios4dolfinx at: General backend support by jorgensd · Pull Request #193 · jorgensd/adios4dolfinx · GitHub