Super slow code (3D elasticity problem)

Hi,
I have a 3D elasticity problem. I created my mesh in GMSH. Here is the code:

from dolfin import *

# Mesh and facet markers
mesh = Mesh("B.xml")
facets = MeshFunction("size_t", mesh, "B_facet_region.xml")

ds = Measure("ds", subdomain_data=facets)
n = FacetNormal(mesh)
V = VectorFunctionSpace(mesh, 'CG', degree=2)

# Material parameters and load magnitudes
E = 5e6
nu = 0.3
F1 = 2000.
F2 = 2000.

# Lamé parameters
mu = E/2/(1+nu)
lmbda = E*nu/(1+nu)/(1-2*nu)

def eps(v):
    return sym(grad(v))

def sigma(v):
    return lmbda * tr(eps(v)) * Identity(3) + 2.0 * mu * eps(v)

du = TrialFunction(V)
u_ = TestFunction(V)

# Bilinear form
a = inner(sigma(du), eps(u_)) * dx

# Linear form: pressure loads on the marked facets
l = inner(-F1 * n, u_) * ds(7) + inner(-F2 * n, u_) * ds(6) + inner(-F2 * n, u_) * ds(5)

# Clamped boundary conditions on facet regions 1 and 2
bc_1 = DirichletBC(V, Constant((0., 0., 0.)), facets, 1)
bc_2 = DirichletBC(V, Constant((0., 0., 0.)), facets, 2)
bc_all = [bc_1, bc_2]

u = Function(V, name="Displacement")
solve(a == l, u, bc_all, solver_parameters={'linear_solver': 'mumps'})

I solved exactly the same problem, with the same mesh and boundary conditions, in another software package (it took 30 s), while solving it with FEniCS takes almost 120 s. Does anybody know what I can do to make the code faster?
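
For what it is worth, a minimal way to see how the time splits between assembly and the linear solve (plain Python timing, not my exact script) would be something like:

import time

t0 = time.time()
A = assemble(a)                   # assemble the stiffness matrix
b = assemble(l)                   # assemble the load vector
for bc in bc_all:
    bc.apply(A, b)
t1 = time.time()

solve(A, u.vector(), b, "mumps")  # direct solve with MUMPS
t2 = time.time()

print("assembly: %.1f s, solve: %.1f s" % (t1 - t0, t2 - t1))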
Thanks!

Hi,
it will depend strongly on the solver used for the linear system. If your other software uses a Krylov solver with a good preconditioner, for instance, that could easily explain the difference.
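
With the high-level solve call you can already request a Krylov method and a preconditioner, e.g. (just one combination to try; what is actually available depends on your PETSc build):

solve(a == l, u, bc_all,
      solver_parameters={'linear_solver': 'cg',
                         'preconditioner': 'hypre_amg'})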

Thanks Jeremy for your response.
I used ANSYS to solve it; I do not know whether it uses any preconditioner by default. My geometry is roughly half of a sphere, clamped at the top and under internal pressure. I used to get a RuntimeError when I did not pass solver_parameters={'linear_solver':'mumps'} to the solver.
That RuntimeError is resolved now, but the solve is still very slow. It is a linear elasticity problem, as you can see. Which preconditioner do you think might be worth trying for such a problem?

I modified your example to illustrate the effect of switching between direct and iterative solvers. Try changing DIRECT between True and False in the following example:

from dolfin import *

#mesh = Mesh("B.xml")
#facets = MeshFunction("size_t", mesh, "B_facet_region.xml")
#ds = Measure("ds", subdomain_data=facets)

# Use a structured unit cube so the example is self-contained
Nel = 20
mesh = UnitCubeMesh(Nel, Nel, Nel)

n = FacetNormal(mesh)
V = VectorFunctionSpace(mesh, 'CG', degree=2)

E = 5e6
nu = 0.3
F1 = 2000.
F2 = 2000.

mu = E/2/(1+nu)
lmbda = E*nu/(1+nu)/(1-2*nu)

def eps(v):
    return sym(grad(v))

def sigma(v):
    return lmbda * tr(eps(v)) * Identity(3) + 2.0 * mu * eps(v)

du = TrialFunction(V)
u_ = TestFunction(V)
a = inner(sigma(du), eps(u_)) * dx

# Defining linear form (a simple body force replaces the boundary tractions)
#l = inner(-F1 * n, u_) * ds(7) + inner(-F2 * n, u_) * ds(6) + inner(-F2 * n, u_) * ds(5)
l = inner(Constant((1, 0, 0)), u_) * dx

#bc_1 = DirichletBC(V, Constant((0., 0., 0.)), facets, 1)
#bc_2 = DirichletBC(V, Constant((0., 0., 0.)), facets, 2)
#bc_all = [bc_1, bc_2]
bc_all = [DirichletBC(V, Constant((0, 0, 0)), "on_boundary")]

u = Function(V, name="Displacement")

DIRECT = False
if DIRECT:
    # Direct solve with MUMPS
    solve(a == l, u, bc_all, solver_parameters={'linear_solver': 'mumps'})
else:
    # Iterative solve: conjugate gradient with a Jacobi preconditioner
    linearSolver = PETScKrylovSolver("cg", "jacobi")
    #linearSolver.parameters['relative_tolerance'] = 1e-3
    A = assemble(a)
    B = assemble(l)
    for bc in bc_all:
        bc.apply(A, B)
    linearSolver.solve(A, u.vector(), B)

Even with the simplest Jacobi preconditioner, I see more than a 5x speedup with DIRECT = False. You can go even faster by increasing the relative tolerance above the default, which is likely overly conservative for many applications. (See the commented-out line in the else: block.)
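
If you want to experiment further, the tolerance, the iteration cap and convergence monitoring are all exposed as solver parameters, e.g. (a small sketch; defaults may differ between DOLFIN versions):

linearSolver = PETScKrylovSolver("cg", "jacobi")
linearSolver.parameters['relative_tolerance'] = 1e-6   # loosen or tighten as needed
linearSolver.parameters['maximum_iterations'] = 10000  # cap on Krylov iterations
linearSolver.parameters['monitor_convergence'] = True  # print the residual at each iteration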

Hi David
I tried what you suggested and here are some of the results:
1 - Direct solver:
Time: 153 seconds /// Maximum displacement: $3.32\times 10^{-2}$

2 - Iterative solver:
linearSolver = PETScKrylovSolver("cg", "jacobi")
linearSolver.parameters['relative_tolerance'] = 1e-3
Time: 73.06 seconds /// Maximum displacement: $3.32\times 10^{-2}$

3 - Iterative solver:
linearSolver = PETScKrylovSolver("gmres", "jacobi")
linearSolver.parameters['relative_tolerance'] = 1e-3
Time: 50 seconds /// Maximum displacement: $3.12\times 10^{-2}$

4 - Iterative solver:
linearSolver = PETScKrylovSolver("gmres", "jacobi")
linearSolver.parameters['relative_tolerance'] = 1e-2
Time: 17.4 seconds /// Maximum displacement: $2.67\times 10^{-2}$


The results were quite interesting, particularly the last one. Using a Krylov solver instead of a direct one, with "gmres" and the "jacobi" preconditioner, and increasing the relative tolerance by one order of magnitude, made the solve nearly 10 times faster, although the results changed a little. Thanks for your help.

A relative tolerance of 1e-3 is very large.

Your operator looks symmetric too, so conjugate gradient should outperform GMRES.

The sparsity pattern of a structured cube meshed with tets performs extremely poorly with multifrontal solvers. Have you tried superlu_dist rather than mumps?
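
As a quick check of what your installation actually provides:

from dolfin import list_linear_solver_methods, list_krylov_solver_preconditioners

list_linear_solver_methods()          # direct and Krylov methods (mumps, superlu_dist, ...)
list_krylov_solver_preconditioners()  # available preconditioners (jacobi, hypre_amg, ...)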

Finally, excluding geometric multigrid methods, I find CG with a smoothed aggregation AMG preconditioner to perform the best (in addition to being scalable). There is a demo.

See this paper, for example, on its application to turbomachinery.
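
Roughly, the approach in that demo looks like the following (a condensed sketch adapted from the DOLFIN linear elasticity demo, reusing a, l, bc_all, V and u from your script; the rigid-body modes are attached as a near-nullspace so the smoothed aggregation AMG can use them):

def build_nullspace(V, x):
    """Build the near-nullspace (rigid-body modes) for 3D elasticity."""
    nullspace_basis = [x.copy() for i in range(6)]

    # Translational modes
    V.sub(0).dofmap().set(nullspace_basis[0], 1.0)
    V.sub(1).dofmap().set(nullspace_basis[1], 1.0)
    V.sub(2).dofmap().set(nullspace_basis[2], 1.0)

    # Rotational modes
    V.sub(0).set_x(nullspace_basis[3], -1.0, 1)
    V.sub(1).set_x(nullspace_basis[3],  1.0, 0)
    V.sub(0).set_x(nullspace_basis[4],  1.0, 2)
    V.sub(2).set_x(nullspace_basis[4], -1.0, 0)
    V.sub(2).set_x(nullspace_basis[5],  1.0, 1)
    V.sub(1).set_x(nullspace_basis[5], -1.0, 2)

    for vec in nullspace_basis:
        vec.apply("insert")

    basis = VectorSpaceBasis(nullspace_basis)
    basis.orthonormalize()
    return basis

A, b = assemble_system(a, l, bc_all)           # symmetric assembly with BCs applied
null_space = build_nullspace(V, u.vector())
as_backend_type(A).set_near_nullspace(null_space)

pc = PETScPreconditioner("petsc_amg")          # smoothed aggregation AMG (PETSc GAMG)
solver = PETScKrylovSolver("cg", pc)
solver.parameters["monitor_convergence"] = True
solver.set_operator(A)
solver.solve(u.vector(), b)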

I am using version 2017.2.0 and superlu_dist is not among the available solvers.
But regarding the KrylovSolver:
I played with the solver to compare the cg and gmres methods. I agree with you that 1e-3 is very large. I found that when the relative tolerance is large (e.g. > 1e-3), gmres outperforms cg (17 seconds of simulation time for gmres compared to 30 seconds for cg).
In the next step I decreased the relative tolerance to 1e-8. This time the simulation with cg took 73 seconds. Interestingly, with gmres I got this error after almost 200 seconds:
Reason: Solution failed to converge in 10000 iterations (PETSc reason DIVERGED_ITS, residual norm ||r|| = 1.252324e-06).
As you pointed out, this problem is symmetric, so cg should work better, but that only turned out to be the case once the relative tolerance was small enough.
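
For reference, the 10000-iteration cap and the behaviour on non-convergence are themselves solver parameters that can be adjusted (a sketch; parameter names as in the KrylovSolver parameter set):

linearSolver = PETScKrylovSolver("gmres", "jacobi")
linearSolver.parameters['maximum_iterations'] = 50000       # raise the iteration cap
linearSolver.parameters['error_on_nonconvergence'] = False  # warn instead of raising an error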