H(curl) element in FEniCS: UMFPACK V5.7.8 (Nov 9, 2018): ERROR: out of memory

Hello Folks,

I have a question regarding N1curl elements. I have a piece of code in which I use N1curl elements to solve Maxwell's equations.

I developed my code on a Mac, where it works completely fine; however, when I run it on a Linux cluster, I encounter this problem:

UMFPACK V5.7.8 (Nov 9, 2018): ERROR: out of memory

I noticed that the problem occurs when I use order = 2 in the following line of my code:

V = FiniteElement('N1curl', msh.ufl_cell(), order)
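
For reference, a small stand-alone sketch like the following (with a unit-cube mesh standing in for my real geometry) gives a rough idea of how quickly the number of degrees of freedom, and hence the size of the system a direct LU solver has to factorize, grows when going from order 1 to order 2:

from dolfin import *

# Stand-in geometry: a unit cube instead of the real mesh
msh = UnitCubeMesh(16, 16, 16)
for order in (1, 2):
    V = FiniteElement('N1curl', msh.ufl_cell(), order)
    W = FunctionSpace(msh, V)
    print('order', order, '->', W.dim(), 'degrees of freedom')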

I also requested a lot more memory in my SBATCH file, but that did not help either.

Again, the code works fine on my Mac; this problem arises only on the cluster.
Could you please give me some tips to solve this problem?

Many thanks, and I look forward to your kind help.

Best,
Ali

You would need to supply a minimal example reproducing the error.
The error most likely occurs when solving the linear system, and could be related to: UMFPACK error: out of memory despite system having free memory - #2 by plugged
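
One thing to try is requesting MUMPS explicitly in every solve call, so that the default UMFPACK LU factorization is never used. A minimal sketch of that (with a unit-cube mesh and a simple curl-curl model problem standing in for your actual formulation) could look like:

from dolfin import *

# Stand-in problem: simple curl-curl form on a unit cube, not the real Maxwell setup
msh = UnitCubeMesh(8, 8, 8)
V = FiniteElement('N1curl', msh.ufl_cell(), 2)
W = FunctionSpace(msh, V)

u = TrialFunction(W)
v = TestFunction(W)
f = Constant((0.0, 0.0, 1.0))  # placeholder source term

a = inner(curl(u), curl(v))*dx + inner(u, v)*dx
L = inner(f, v)*dx
bc = DirichletBC(W, Constant((0.0, 0.0, 0.0)), "on_boundary")

uh = Function(W)
# Ask for MUMPS explicitly instead of the default LU backend
solve(a == L, uh, bc, solver_parameters={"linear_solver": "mumps"})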

Hi, and thank you, Jorgen, for your reply. Honestly, I don't know how to make a minimal example because my code is large and several parts work together. But what I can say based on my observations is as follows:

  1. The code works fine, in a reasonable time, on my laptop (macOS).
  2. The code does not work on the CentOS Linux cluster with a similar configuration.
  3. I do not use mpirun to run my code.
  4. I am using "mumps" in my solvers (see the sketch after this list).
  5. When I use order = 1 in FiniteElement('N1curl', msh.ufl_cell(), order), the program runs, but very slowly.
  6. When I use order = 2 and reduce the mesh considerably, the UMFPACK error does not appear during a 12-hour run.
  7. A similar configuration on my laptop with 10 cores takes around 45 minutes.
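
To make point 4 concrete, here is a simplified sketch of what I mean by using "mumps" explicitly when solving the assembled system (a unit-cube mesh and a plain curl-curl form stand in for my actual Maxwell code):

from dolfin import *

# Simplified stand-in for my actual problem
msh = UnitCubeMesh(8, 8, 8)
W = FunctionSpace(msh, FiniteElement('N1curl', msh.ufl_cell(), 1))
u = TrialFunction(W)
v = TestFunction(W)
a = inner(curl(u), curl(v))*dx + inner(u, v)*dx
L = inner(Constant((0.0, 0.0, 1.0)), v)*dx
bc = DirichletBC(W, Constant((0.0, 0.0, 0.0)), "on_boundary")

A, b = assemble_system(a, L, bc)
uh = Function(W)
solver = LUSolver("mumps")  # name the backend explicitly instead of "default"
solver.set_operator(A)
solver.solve(uh.vector(), b)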

Maybe this piece of information helps:

list_lu_solver_methods()
LU method     |  Description
------------------------------------------------------------------------------
default       |  default LU solver
mumps         |  MUMPS (MUltifrontal Massively Parallel Sparse direct Solver)
petsc         |  PETSc built in LU solver
superlu       |  SuperLU
superlu_dist  |  Parallel SuperLU
umfpack       |  UMFPACK (Unsymmetric MultiFrontal sparse LU factorization)

I hope you can help me understand the problem. I know that, without a minimal example, it will be hard to reproduce.

Thanks again.