I have a slightly strange issue at the moment regarding parallelism in FEniCS. I'm working on an adaptive re-meshing scheme, and I'm using the coordinates of a contour (generated through matplotlib) to control the adaptivity. When run in serial, the code below returns an ordered list of coordinates that I can then use to generate a mesh with gmsh. However, in parallel this doesn't work, since the mesh is distributed across cores.
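For reference, the contour-extraction step can be isolated from FEniCS entirely. This is a minimal standalone sketch (my own, using a plain NumPy grid in place of the interpolated FEniCS function) of pulling ordered vertex arrays out of a matplotlib contour; the grid, level-set values and smoothing width are placeholder assumptions:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt
import numpy as np

# Smoothed indicator of a circle of radius r centred at (A, B);
# the 0.5 level set coincides with the zero of the signed distance.
A, B, r = 0.5, 0.5, 0.25
x, y = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 2, 201))
dist = np.sqrt((x - A) ** 2 + (y - B) ** 2) - r
phi = 1.0 / (1.0 + np.exp(dist / 0.05))

cs = plt.contour(x, y, phi, levels=[0.5])
plt.close()

# matplotlib >= 3.8 exposes the paths on the ContourSet itself;
# older versions keep them on per-level collections.
try:
    paths = cs.get_paths()
except AttributeError:
    paths = [p for c in cs.collections for p in c.get_paths()]

# Each entry is an (N, 2) array of ordered (x, y) vertices.
segments = [p.vertices for p in paths]
```

The ordering matters here: each `vertices` array traces the contour consecutively, which is what makes it usable as a gmsh boundary loop.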
```python
from fenics import *
from mpi4py import MPI
import matplotlib.pyplot as plt

comm = MPI.COMM_WORLD
rank = comm.rank

if rank == 0:
    nx = 5
    mesh = RectangleMesh(Point(0, 0), Point(1, 2), nx, 2 * nx)
    V = FunctionSpace(mesh, 'CG', 2)
    eps = Constant(0.5 * mesh.hmin() ** 0.9)

    # Signed distance to a circle of radius r centred at (A, B)
    dist = Expression('sqrt(pow(x[0] - A, 2) + pow(x[1] - B, 2)) - r',
                      degree=2, A=0.5, B=0.5, r=0.25)
    # Smoothed indicator: the 0.5 level set coincides with dist = 0
    dist2 = Expression('1 / (1 + exp(dist / eps))',
                       degree=2, eps=eps, dist=dist)
    phi0 = interpolate(dist2, V)

    # Extract the ordered contour vertices from the matplotlib plot
    cs = plot(phi0, mode='contour', levels=[0.5])
    plt.close()
    a = []
    for item in cs.collections:
        for i in item.get_paths():
            a.append(i.vertices)
else:
    a = None

coords = comm.bcast(a, root=0)
```
The code above is what I would like to have working, but it just seems to freeze when run from the terminal. The idea is that the contour coordinates are computed on a single processor, so they form an ordered list, and are then broadcast to all other cores so that the rest of the code can run in parallel.