ValueError: vector::_M_default_append

I am testing the weak scalability of FEniCS parallel computing, so I have to use a very fine mesh as I increase the number of cores. In this case, the mesh covers the square [-128, 128] × [-128, 128] with 20000 cells both horizontally and vertically (a 20000 × 20000 mesh). I am using 64 cores, each with 20 GB of memory, so I think the memory should be enough. The issue is that I keep getting the following error:
[Screenshot of the traceback, ending in ValueError: vector::_M_default_append]
Any hints or ideas are appreciated.
Thank you!
The code is attached here.

Does this error occur for smaller meshes? (What is the smallest mesh you can use to obtain this error?)
I was able to run:

from dolfin import *                                                            
mesh = RectangleMesh(Point(-128,-128), Point(128, 128), 15000,15000)            

on a system with 50 processes and 256 GB of RAM.
However, for 20000 × 20000 I am able to reproduce your error.
My bet is that the built-in mesh initializer is not made for HPC-scale meshes.
You could try to create a smaller mesh and run refine on it (it is going to be more memory consuming, but it seems like you should have sufficient memory):

from dolfin import *

mesh = RectangleMesh(Point(-128, -128), Point(128, 128), 10000, 10000)
# print from a single rank to avoid duplicate output
if MPI.rank(MPI.comm_world) == 0:
    print(mesh.num_cells())
# uniform refinement: each triangle is split into four
mesh2 = refine(mesh)
if MPI.rank(MPI.comm_world) == 0:
    print(mesh2.num_cells())

(I do not have sufficient RAM on my system to run the refine procedure).

Yes, thank you. Refining the mesh solves the problem. I did a few tests as well, and for your information, the maximum resolution for the built-in mesh initializer lies somewhere between 16250 × 16250 (works) and 16875 × 16875 (error).
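For a sense of scale, here is a back-of-the-envelope count at those resolutions, assuming the default "right" diagonal (two triangles per grid square):

# rough size of an N x N RectangleMesh with the default "right" diagonal
for N in (16250, 16875, 20000):
    cells = 2 * N * N          # two triangles per grid square
    vertices = (N + 1) ** 2
    print(f"N = {N}: {cells:.2e} cells, {vertices:.2e} vertices")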
So for a weak scalability test, where we need the mesh to be very fine, do you happen to know any other way of initializing the mesh? Since I see people carrying out tests on thousands of cores or even more, I am curious how they generate their meshes.
Again, thank you for the answer.

Well, there are many mesh generation packages out there. For most of my problems, I use Gmsh/pygmsh. For scalability studies, I use the refine command to increase the size of the problem rather than regenerating the whole mesh for each problem size; see the sketch below.
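A minimal sketch of that approach, assuming legacy DOLFIN and its built-in refine (the base resolution and the number of refinements are placeholders you would match to the core count):

from dolfin import *

# start from a coarse base mesh and refine it uniformly;
# each refine() call quadruples the number of cells in 2D
mesh = RectangleMesh(Point(-128, -128), Point(128, 128), 1000, 1000)
num_refinements = 3  # placeholder: choose to match the number of cores
for _ in range(num_refinements):
    mesh = refine(mesh)
if MPI.rank(MPI.comm_world) == 0:
    print(mesh.num_cells())  # process-local cell count in parallel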

Depending on the application, the built-in meshes usually do not suffice to describe the geometries of interest. Therefore, in dolfin you can load meshes in parallel if they are in the XDMF format. See for instance Transitioning from mesh.xml to mesh.xdmf, from dolfin-convert to meshio for how to convert msh files (from Gmsh) to XDMF.
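As a rough sketch of that workflow (assuming meshio >= 4, a planar 2D triangular mesh, and placeholder file names mesh.msh / mesh.xdmf):

import meshio

# run once in serial: convert the Gmsh file to XDMF
msh = meshio.read("mesh.msh")
points = msh.points[:, :2]  # drop the z-column for a planar 2D mesh
cells = {"triangle": msh.cells_dict["triangle"]}
meshio.write("mesh.xdmf", meshio.Mesh(points=points, cells=cells))

The XDMF file can then be read in parallel, and dolfin distributes it across the MPI processes:

from dolfin import *

mesh = Mesh()
XDMFFile(MPI.comm_world, "mesh.xdmf").read(mesh)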

Thank you. This is helpful.