Cannot create mesh in parallel

I am trying to use the following MWE to create a mesh.

from mpi4py import MPI
from dolfinx import mesh

domain = mesh.create_rectangle(MPI.COMM_WORLD, [[0, 0], [60, 20]],
                               [1500, 500], mesh.CellType.triangle)
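
For context, the script is launched in parallel with an MPI launcher along these lines (the script name `mwe.py` is a placeholder for the file containing the snippet above):

```shell
# Run the MWE above on 16 MPI processes; replace mwe.py with the actual
# script name, and -n with the desired process count.
mpirun -n 16 python3 mwe.py
```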

I can only create the mesh using a small number of processes (fewer than 16 in this case). If I use 16 or more processes, the following error appears:

Loguru caught a signal: SIGBUS
Stack trace:
45      0x5596b4794245 _start + 37
44      0x7f0db04e5e40 __libc_start_main + 128
43      0x7f0db04e5d90 /lib/x86_64-linux-gnu/libc.so.6(+0x29d90) [0x7f0db04e5d90]
42      0x5596b479434d Py_BytesMain + 45
41      0x5596b47be0ae Py_RunMain + 702
40      0x5596b47cce33 _PyRun_AnyFileObject + 67
39      0x5596b47cd138 _PyRun_SimpleFileObject + 424
38      0x5596b47cdc55 python3(+0x265c55) [0x5596b47cdc55]
37      0x5596b47c6d5b python3(+0x25ed5b) [0x5596b47c6d5b]
36      0x5596b47cdf08 python3(+0x265f08) [0x5596b47cdf08]
35      0x5596b47a1456 PyEval_EvalCode + 134
34      0x5596b46a9766 python3(+0x141766) [0x5596b46a9766]
33      0x5596b46b2a72 _PyEval_EvalFrameDefault + 24882
32      0x5596b46c43ac _PyFunction_Vectorcall + 124
31      0x5596b46b348e _PyEval_EvalFrameDefault + 27470
30      0x5596b46ba7db _PyObject_MakeTpCall + 603
29      0x5596b46c3b5e python3(+0x15bb5e) [0x5596b46c3b5e]
28      0x7f0da02ecb20 /usr/local/dolfinx-real/lib/python3.10/dist-packages/dolfinx/cpp.cpython-310-x86_64-linux-gnu.so(+0x79b20) [0x7f0da02ecb20]
27      0x7f0da044d101 /usr/local/dolfinx-real/lib/python3.10/dist-packages/dolfinx/cpp.cpython-310-x86_64-linux-gnu.so(+0x1da101) [0x7f0da044d101]
26      0x7f0da01b7e29 dolfinx::mesh::create_rectangle(int, std::array<std::array<double, 2ul>, 2ul> const&, std::array<unsigned long, 2ul>, dolfinx::mesh::CellType, std::function<dolfinx::graph::AdjacencyList<int> (int, int, int, dolfinx::graph::AdjacencyList<long> const&)> const&, dolfinx::mesh::DiagonalType) + 137
25      0x7f0da01b7309 /usr/local/dolfinx-real/lib/libdolfinx.so.0.6(+0x184309) [0x7f0da01b7309]
24      0x7f0da01993d8 dolfinx::mesh::create_mesh(int, dolfinx::graph::AdjacencyList<long> const&, dolfinx::fem::CoordinateElement const&, std::span<double const, 18446744073709551615ul>, std::array<unsigned long, 2ul>, std::function<dolfinx::graph::AdjacencyList<int> (int, int, int, dolfinx::graph::AdjacencyList<long> const&)> const&) + 328
23      0x7f0da0426f21 /usr/local/dolfinx-real/lib/python3.10/dist-packages/dolfinx/cpp.cpython-310-x86_64-linux-gnu.so(+0x1b3f21) [0x7f0da0426f21]
22      0x7f0da0444026 /usr/local/dolfinx-real/lib/python3.10/dist-packages/dolfinx/cpp.cpython-310-x86_64-linux-gnu.so(+0x1d1026) [0x7f0da0444026]
21      0x5596b476e0ff PyObject_CallObject + 191
20      0x5596b46c3b5e python3(+0x15bb5e) [0x5596b46c3b5e]
19      0x7f0da02ecb20 /usr/local/dolfinx-real/lib/python3.10/dist-packages/dolfinx/cpp.cpython-310-x86_64-linux-gnu.so(+0x79b20) [0x7f0da02ecb20]
18      0x7f0da04422cd /usr/local/dolfinx-real/lib/python3.10/dist-packages/dolfinx/cpp.cpython-310-x86_64-linux-gnu.so(+0x1cf2cd) [0x7f0da04422cd]
17      0x7f0da0427021 /usr/local/dolfinx-real/lib/python3.10/dist-packages/dolfinx/cpp.cpython-310-x86_64-linux-gnu.so(+0x1b4021) [0x7f0da0427021]
16      0x7f0da01cee3c /usr/local/dolfinx-real/lib/libdolfinx.so.0.6(+0x19be3c) [0x7f0da01cee3c]
15      0x7f0da01ced78 /usr/local/dolfinx-real/lib/libdolfinx.so.0.6(+0x19bd78) [0x7f0da01ced78]
14      0x7f0da0426bb9 /usr/local/dolfinx-real/lib/python3.10/dist-packages/dolfinx/cpp.cpython-310-x86_64-linux-gnu.so(+0x1b3bb9) [0x7f0da0426bb9]
13      0x7f0da01328bc dolfinx::graph::partition_graph(int, int, dolfinx::graph::AdjacencyList<long> const&, bool) + 124
12      0x7f0da01307ba /usr/local/dolfinx-real/lib/libdolfinx.so.0.6(+0xfd7ba) [0x7f0da01307ba]
11      0x7f0da012f13f /usr/local/dolfinx-real/lib/libdolfinx.so.0.6(+0xfc13f) [0x7f0da012f13f]
10      0x7f0da01e45fb _SCOTCHdgraphCheck + 4427
9       0x7f0da98e6159 PMPI_Waitall + 233
8       0x7f0da9aa1ed9 /usr/local/lib/libmpi.so.12(+0x332ed9) [0x7f0da9aa1ed9]
7       0x7f0da9aa1a05 /usr/local/lib/libmpi.so.12(+0x332a05) [0x7f0da9aa1a05]
6       0x7f0da9a9d275 /usr/local/lib/libmpi.so.12(+0x32e275) [0x7f0da9a9d275]
5       0x7f0da9a9b09d /usr/local/lib/libmpi.so.12(+0x32c09d) [0x7f0da9a9b09d]
4       0x7f0da9b4566b /usr/local/lib/libmpi.so.12(+0x3d666b) [0x7f0da9b4566b]
3       0x7f0da9a63d1a /usr/local/lib/libmpi.so.12(+0x2f4d1a) [0x7f0da9a63d1a]
2       0x7f0da9a63741 /usr/local/lib/libmpi.so.12(+0x2f4741) [0x7f0da9a63741]
1       0x7f0db065cb00 /lib/x86_64-linux-gnu/libc.so.6(+0x1a0b00) [0x7f0db065cb00]
0       0x7f0db04fe520 /lib/x86_64-linux-gnu/libc.so.6(+0x42520) [0x7f0db04fe520]
2023-09-01 15:45:42.088 (   1.238s) [main            ]                       :0     FATL| Signal: SIGBUS

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   PID 12066 RUNNING AT 40c2c568e6aa
=   EXIT CODE: 9
=   CLEANING UP REMAINING PROCESSES
=   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Terminated (signal 15)
This typically refers to a problem with your application.
Please see the FAQ page for debugging suggestions

The software version is:

DOLFINx version: 0.6.0 based on GIT commit: 24f86a9ce57df6978070dbee22b3eae8bb77235f of https://github.com/FEniCS/dolfinx/

I have checked that there is plenty of available memory in both the host machine and WSL2. I would like to know how to solve this issue.

I tried an earlier version of FEniCSx on the same machine. The version information is

DOLFINx version: 0.3.1.0 based on GIT commit: 1d2fc1a4348d2c475f36ee1556e0cb57387f0f80 of https://github.com/FEniCS/dolfinx/

Interestingly, I can now run the code with up to 22 processes using a similar MWE:

from mpi4py import MPI
from dolfinx.generation import RectangleMesh
from dolfinx.mesh import CellType

domain = RectangleMesh(MPI.COMM_WORLD, [[0, 0, 0], [60, 20, 0]],
                       [1500, 500], CellType.triangle)

However, using more processes still triggers the error:

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   PID 4653 RUNNING AT 2669c64a2d5e
=   EXIT CODE: 7
=   CLEANING UP REMAINING PROCESSES
=   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Terminated (signal 15)
This typically refers to a problem with your application.
Please see the FAQ page for debugging suggestions

I fixed the issue by following: Unable to read 3D mesh when running script in parallel - #2 by dokken.

In short, the shared memory needs to be enlarged when creating the Docker container, because Docker's default shared memory size is only 64 MB, which MPI exhausts when running with many processes.
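
For reference, the shared memory size is set with Docker's `--shm-size` flag when the container is created. A sketch of such a command (the image name and size are examples; substitute the DOLFINx image and a size appropriate for your machine):

```shell
# Create a container with 512 MB of shared memory instead of the 64 MB
# default. Increase further (e.g. --shm-size=1g) for larger process counts.
docker run -ti --shm-size=512m dolfinx/dolfinx:stable
```

Note that the flag only takes effect at container creation time, so an existing container has to be recreated with the larger setting.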