Dear community,
I found that the dolfinx.plot.vtk_mesh() function crashes in parallel when the number of MPI processes exceeds the number of mesh cells. While one seldom runs with more processes than there are cells, I would like to ask whether there is a simple fix that would make the code more robust.
This issue seems to appear only in v0.9.0 of DOLFINx. Here is the MWE (test.py):
from mpi4py import MPI
import dolfinx

mesh = dolfinx.mesh.create_unit_square(
    MPI.COMM_WORLD, 1, 2, dolfinx.mesh.CellType.quadrilateral
)
dolfinx.plot.vtk_mesh(mesh)
Running mpirun -n 2 python3 test.py works fine (the mesh has two cells), while mpirun -n 3 python3 test.py raises the following error:
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 8 FPE: Floating Point Exception,probably divide by zero
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and https://petsc.org/release/faq/
[0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
[0]PETSC ERROR: to get more information on the crash.
Abort(59) on node 0 (rank 0 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0