Hi Tomaubier,
I’ve implemented a function that reads a nodal vector field f, provided together with the nodal coordinates x, y, z, from a text file ordered as

```
fx1 fy1 fz1 x1 y1 z1
fx2 fy2 fz2 x2 y2 z2
...
fxn fyn fzn xn yn zn
```

using the following function:
```python
import numpy as np
from petsc4py import PETSc

def readfunction(self, f, V, datafile, normalize=False):
    # block size of vector
    bs = f.vector.getBlockSize()
    # load data and coordinates
    data = np.loadtxt(datafile, usecols=np.arange(0, bs), ndmin=2)
    coords = np.loadtxt(datafile, usecols=(-3, -2, -1))  # the last three columns are always the coordinates
    # new node coordinates (dofs might be re-ordered in parallel);
    # in case of DG fields, these are the Gauss point coordinates
    co = V.tabulate_dof_coordinates()
    # index map
    im = V.dofmap.index_map.global_indices()
    tol = 1.0e-8
    tolerance = int(-np.log10(tol))
    # since the ordering of the dof ids might change in parallel, we have to find the
    # mapping between original and new id via the coordinates
    ci = 0
    for i in im:
        ind = np.where((np.round(coords, tolerance) == np.round(co[ci], tolerance)).all(axis=1))[0]
        # only write if we've found the index
        if len(ind):
            if normalize:
                norm_sq = 0.
                for j in range(bs):
                    norm_sq += data[ind[0], j]**2.
                norm = np.sqrt(norm_sq)
            else:
                norm = 1.
            for j in range(bs):
                f.vector[bs*i+j] = data[ind[0], j] / norm
        ci += 1

    f.vector.assemble()
    # update ghosts
    f.vector.ghostUpdate(addv=PETSc.InsertMode.INSERT, mode=PETSc.ScatterMode.FORWARD)
```
This works very efficiently for me in parallel, also for large problems.
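To illustrate the file layout and the coordinate-based matching outside of FEniCS, here is a minimal, self-contained numpy sketch (file name, values, and the artificial re-ordering are made up for the demonstration):

```python
import numpy as np

# write a small example file in the layout fx fy fz x y z (values are made up)
field = np.array([[1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                  [0.0, 2.0, 0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 3.0, 0.0, 1.0, 0.0]])
np.savetxt('field_example.txt', field)

bs = 3  # vector block size
data = np.loadtxt('field_example.txt', usecols=np.arange(0, bs), ndmin=2)
coords = np.loadtxt('field_example.txt', usecols=(-3, -2, -1))

# pretend the dof coordinates come back re-ordered, as can happen in parallel,
# and recover each original row index by matching rounded coordinates
co = coords[[2, 0, 1]]
tolerance = 8
matched = []
for ci in range(len(co)):
    ind = np.where((np.round(coords, tolerance) == np.round(co[ci], tolerance)).all(axis=1))[0]
    matched.append(ind[0])
# matched now maps each re-ordered dof back to its row in the file
```

The rounding makes the comparison robust against small floating-point differences between the stored and the tabulated coordinates.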
For other things such as restarting a simulation, I’m using a PETSc viewer to write and read vectors, e.g.

```python
viewer = PETSc.Viewer().createMPIIO('name.dat', 'w', self.comm)
variable.vector.view(viewer)
```

and

```python
viewer = PETSc.Viewer().createMPIIO('name.dat', 'r', self.comm)
variable.vector.load(viewer)
variable.vector.ghostUpdate(addv=PETSc.InsertMode.INSERT, mode=PETSc.ScatterMode.FORWARD)
```
Hope this helps!
Best,
Marc