Bigger mesh kills kernel in Elasticity Demo

I am running the Elasticity Demo (https://github.com/hplgit/fenics-tutorial/blob/master/pub/python/vol1/ft06_elasticity.py). Note that you have to add from ufl import nabla_div at the top of the script to get it to work with the latest FEniCS Docker container.

I change line 24 from:

mesh = BoxMesh(Point(0, 0, 0), Point(L, W, W), 10, 3, 3)

to:

mesh = BoxMesh(Point(0, 0, 0), Point(L, W, W), 100, 30, 30)

This change crashes the kernel.

Question: How do I run fenics with larger meshes?

In which environment are you running dolfin?

I think the default solver when running in serial is UMFPACK, which is notorious for its very low default memory limit. Running in parallel will select MUMPS by default, which should use all available memory.
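
For example (the script name is taken from the demo above; the process count is illustrative), running the script under MPI makes DOLFIN pick MUMPS as the default direct solver:

```shell
# Run the demo on 4 MPI processes; in parallel DOLFIN selects
# MUMPS as the default LU solver instead of UMFPACK.
mpirun -n 4 python3 ft06_elasticity.py
```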

With regard to solving larger problems, you should also consider iterative solvers, e.g. as in the undocumented 3D elasticity (pulley) demo.


I have the same problem with the undocumented 3D Pulley demo you linked, Nate.

I change this line:

mesh = Mesh()
XDMFFile(MPI.comm_world, "pulley.xdmf").read(mesh)

to

mesh = BoxMesh(Point(0.0, 0.0, 0.0), Point(10.0, 10.0, 10.0), 100, 100, 10)

Everything goes just fine. Then I make the mesh bigger:

mesh = BoxMesh(Point(0.0, 0.0, 0.0), Point(10.0, 10.0, 10.0), 100, 100, 100)

and the kernel crashes.

There are no error messages or anything so I’m having a hard time debugging.

Forgot to add: I am running fenics within a docker container.

BoxMesh is generated on a single process and then distributed to all other processes. It looks like you’re generating an extremely large mesh (on the order of 100^3 cells) on that single process.

You should either:

  • Generate a coarser mesh, distribute to all processes, then refine that mesh, e.g.
mesh = BoxMesh(Point(0, 0, 0), Point(10, 10, 10), 10, 10, 10)  # Coarse mesh
mesh = refine(mesh, redistribute=True)

where the redistribute option indicates whether to rebalance the mesh across all processes. Use it or not as you require. If you’re performing uniform refinement everywhere, you probably don’t need to redistribute. You can also refine the mesh as many times as you like. Figures 3 & 4 of this give some indication of what to expect in terms of mesh quality.

  • Read in a mesh from a scalable I/O scheme, e.g. XDMFFile/HDF5.

I got it to work by switching to an Anaconda-based environment rather than a Docker container!

For those who are interested, this is how I set up the conda environment:

conda create -n fenicsproject -c conda-forge mshr=2019.1.0=py38h2af9582_2 scipy

This is surprising since I assumed it would be a memory issue. Good to know it’s working. But do consider my earlier points if you move to extremely fine meshes.

You could also increase the shared memory size in Docker.
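
For example (image name and sizes are illustrative; --shm-size is the standard Docker flag for the /dev/shm segment):

```shell
# Start the FEniCS container with 8 GiB of shared memory and a
# 16 GiB overall memory cap instead of the Docker defaults.
docker run -ti --shm-size=8g --memory=16g \
    quay.io/fenicsproject/stable:latest
```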


When I use a Docker container, the kernel frequently crashes. I allocated about 10 GB of memory to Docker, but it still crashes. How can I check the cause of the error?

$ sudo docker stats

CONTAINER ID   NAME      CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O         PIDS
b571806509a1   Fenicsx   0.00%     1.338GiB / 9.564GiB   13.99%    1.59MB / 11.1MB   1.02GB / 15.5MB   16

shared memory:

root@b571806509a1:~# df -h | grep shm
shm             5.0G     0  5.0G   0% /dev/shm
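
One thing worth checking (the container name is taken from the docker stats output above) is whether the process was killed by the out-of-memory killer; these are standard Docker/Linux commands:

```shell
# Did Docker's OOM killer terminate the container?
docker inspect --format '{{.State.OOMKilled}}' Fenicsx

# Kernel log entries on the host about OOM kills, if any.
dmesg | grep -i -E 'oom|killed process'
```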

Without a minimal code example, the error message, and a description of how you initialized the Docker container, it is hard to give you any further guidance.