Hello,
I recently installed FEniCSx with dolfinx 0.9.0.0. I'm working on a 3D problem, where I read a Gmsh file with the gmshio.read_from_msh command. The definition of the variational problem works, and the visualization in ParaView looks as it should. To speed up the simulation I use some parallelization, which again works out of the box. Now I want to add a step and evaluate the nodal values and coordinates of a surface, which I defined via a PhysicalGroup in Gmsh.
The following code snippet runs perfectly on a single core:
import numpy as np

# Geometry indices of all vertices on the tagged surface
facets = facet_tags.find(surf_tag)
indices = mesh.entities_to_geometry(domain, domain.topology.dim - 1, facets, False)
indices = np.unique(indices)

# Coordinates and (complex) field values at those vertices
coordinates = domain.geometry.x[indices]
vertex_realvalues = E.x.array[indices].real
vertex_imagvalues = E.x.array[indices].imag

outpfile = open('results/ScatterE_1CPU.dat', 'w')
outpfile.write('# %11s %22s %22s %22s %22s\n' % ('x', 'y', 'z', 'E_real', 'E_imag'))
for coord, Erealval, Eimagval in zip(coordinates, vertex_realvalues, vertex_imagvalues):
    outpfile.write('%11.15f %22.15f %22.15f %22.15f %22.15f\n' % (coord[0], coord[1], coord[2], Erealval, Eimagval))
outpfile.close()
However, when running this in parallel I get errors with the message "division by zero". I've read that it's related to a bug in mesh.entities_to_geometry which is fixed in dolfinx 0.10.0.0, which is not yet in the apt repositories. Is there a workaround for now? I figured that, due to the parallelization, the facets array may be empty on some ranks, so I followed up with an allCPUfacets = domain.comm.gather(facets, root=0) call, whose result is later reduced to a flat list again via
# Drop ranks that contributed no tagged facets, then flatten
facetslistoflist = [list(nodeid) for nodeid in allCPUfacets if list(nodeid)]
facetslist = [nodeid for nodeidlist in facetslistoflist for nodeid in nodeidlist]
facets = np.asarray(facetslist)
on the rank where MPI.COMM_WORLD.Get_rank() == 0. This procedure is repeated for the indices variable and so forth, but did not give a satisfactory result. With the mesh.compute_incident_entities command the division-by-zero error no longer aborts the program, but the indices then point to different nodes, since the evaluated coordinates are not on a plane anymore.
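Incidentally, the three flattening lines collapse into a single np.concatenate call; this is only a compaction, not a fix. More importantly, the gathered facet indices are local to the rank that produced them, so using them to index data on rank 0 can point at the wrong entities, which may explain the off-plane coordinates. A sketch with the gather result simulated by plain arrays, so it runs without MPI:

```python
import numpy as np

# Simulated result of domain.comm.gather(facets, root=0): one array per
# rank; ranks that own no tagged facets contribute an empty array.
allCPUfacets = [np.array([3, 7, 11]), np.array([], dtype=np.int64), np.array([12])]

nonempty = [a for a in allCPUfacets if a.size > 0]
facets = np.concatenate(nonempty) if nonempty else np.empty(0, dtype=np.int64)
print(facets.tolist())  # [3, 7, 11, 12]
```

Note that these are still rank-local facet indices; they only mean something on the rank that sent them.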
Can anyone point me towards the correct evaluation of coordinates and nodal values for parallelized objects? Maybe a .gather or .recv combined with evaluation on a single core is all I need, since it's just a post-processing step. But I'm fairly new to FEniCS and mpi4py, and I struggle to adapt the old dolfin examples I find in the forum.
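For the single-core post-processing idea, one pattern that avoids index translation entirely: let each rank extract its own coordinates and values locally (plain floats travel fine through gather), then let rank 0 concatenate the gathered arrays and write the file. A minimal sketch of the rank-0 side, with the comm.gather results simulated by plain NumPy arrays; the file layout follows the serial snippet above, and the filename is made up for illustration:

```python
import os
import tempfile

import numpy as np

# Simulated results of domain.comm.gather(..., root=0) for two ranks;
# the second rank owns no facets of the tagged surface.
gathered_coords = [np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]), np.empty((0, 3))]
gathered_real = [np.array([0.5, -0.5]), np.empty(0)]
gathered_imag = [np.array([0.1, 0.2]), np.empty(0)]

# In the real code this part runs inside "if domain.comm.rank == 0:".
coords = np.concatenate(gathered_coords)
revals = np.concatenate(gathered_real)
imvals = np.concatenate(gathered_imag)

path = os.path.join(tempfile.gettempdir(), 'ScatterE_MPI.dat')  # hypothetical name
with open(path, 'w') as outpfile:
    outpfile.write('# %11s %22s %22s %22s %22s\n' % ('x', 'y', 'z', 'E_real', 'E_imag'))
    for (x, y, z), er, ei in zip(coords, revals, imvals):
        outpfile.write('%22.15f %22.15f %22.15f %22.15f %22.15f\n' % (x, y, z, er, ei))
```

One caveat: vertices on facets shared between partitions can show up on more than one rank, so a deduplication pass over the gathered coordinates may be needed before writing.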
Thanks in advance.