Problem with Cahn-Hilliard demo and MPI command

Hi everyone,
I’m testing the Cahn-Hilliard demo (5. Cahn-Hilliard equation — FEniCS Project)
but I am experiencing problems. In particular, I obtain this error message:

line 35, in __init__
    random.seed(2 + MPI.rank(mpi_comm_world()))
NameError: name 'mpi_comm_world' is not defined

How can I solve it?
Thanks in advance

You are looking at very old documentation (v1.4.0); the latest demo is available at: Cahn-Hilliard equation — DOLFIN documentation
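For reference, in the 2019.1.0 version of the demo the initial condition seeds the random generator through the MPI class; the free function mpi_comm_world() no longer exists in the Python API. A minimal sketch of the relevant lines:

import random
from dolfin import MPI, UserExpression

class InitialConditions(UserExpression):
    def __init__(self, **kwargs):
        # mpi_comm_world() was removed; MPI.comm_world is the replacement,
        # and seeding with the rank gives each process different noise
        random.seed(2 + MPI.rank(MPI.comm_world))
        super().__init__(**kwargs)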


Thanks for the advice.
I am now trying to modify it to fit a problem on the 3D domain [0,1]×[0,1]×[0,1].
For the moment I have tried the following changes:

- line 124: I added values[2] = 0.0
- line 126: return (3,)
- line 197: mesh = UnitCubeMesh.create(96, 96, 96, CellType.Type.tetrahedron)
- line 199: ME = FunctionSpace(mesh, P1 * P1 * P1)

I’m having trouble with the interpolation at lines 240-242, and later with the function definition at line 257.

Please add a minimal example with what you have so far, and the error message you are seeing.

This is my modified code:

import random
from dolfin import *

class InitialConditions(UserExpression):
    def __init__(self, **kwargs):
        random.seed(2 + MPI.rank(MPI.comm_world))
        super().__init__(**kwargs)
    def eval(self, values, x):
        values[0] = 0.63 + 0.02*(0.5 - random.random())
        values[1] = 0.0
        values[2] = 0.0
    def value_shape(self):
        return (3,)

class CahnHilliardEquation(NonlinearProblem):
    def __init__(self, a, L):
        NonlinearProblem.__init__(self)
        self.L = L
        self.a = a
    def F(self, b, x):
        assemble(self.L, tensor=b)  # assemble residual vector
    def J(self, A, x):
        assemble(self.a, tensor=A)  # assemble Jacobian matrix

lmbda  = 1.0e-02  # surface parameter
dt     = 5.0e-06  # time step
theta  = 0.5      # time-stepping family; theta = 0.5 -> Crank-Nicolson

parameters["form_compiler"]["optimize"]     = True
parameters["form_compiler"]["cpp_optimize"] = True

mesh = UnitCubeMesh.create(96, 96, 96, CellType.Type.tetrahedron)
P1 = FiniteElement("Lagrange", mesh.ufl_cell(), 1)
ME = FunctionSpace(mesh, P1*P1*P1)

du    = TrialFunction(ME)
q, v  = TestFunctions(ME)

u   = Function(ME)  # current solution
u0  = Function(ME)  # solution from previous converged step

dc, dmu = split(du)
c,  mu  = split(u)
c0, mu0 = split(u0)

u_init = InitialConditions(degree=1)
u.interpolate(u_init)
u0.interpolate(u_init)

c = variable(c)  # declare c a variable so f can be differentiated w.r.t. it
f    = 100*c**2*(1-c)**2
dfdc = diff(f, c)

mu_mid = (1.0-theta)*mu0 + theta*mu

L0 = c*q*dx - c0*q*dx + dt*dot(grad(mu_mid), grad(q))*dx
L1 = mu*v*dx - dfdc*v*dx - lmbda*dot(grad(c), grad(v))*dx
L = L0 + L1

a = derivative(L, u, du)  # directional derivative about u in the direction of du (Jacobian)

problem = CahnHilliardEquation(a, L)
solver = NewtonSolver()
solver.parameters["linear_solver"] = "lu"
solver.parameters["convergence_criterion"] = "incremental"
solver.parameters["relative_tolerance"] = 1e-6

file = File("output.pvd", "compressed")

# Step in time
t = 0.0
T = 50*dt
while (t < T):
    t += dt
    u0.vector()[:] = u.vector()
    solver.solve(problem, u.vector())
    file << (u.split()[0], t)

And this is the error message:

Can't add expressions with different shapes.
Traceback (most recent call last):
  File "/Users/lorenzomarta/Desktop/Simulazioni_demo_fenics/Cahn-Hilliard/demo_cahn-hilliard.py", line 258, in <module>
    f    = 100*c**2*(1-c)**2
  File "/Applications/anaconda3/envs/fenicsproject/lib/python3.10/site-packages/ufl/exproperators.py", line 243, in _rsub
    return Sum(o, -self)
  File "/Applications/anaconda3/envs/fenicsproject/lib/python3.10/site-packages/ufl/algebra.py", line 53, in __new__
    error("Can't add expressions with different shapes.")
  File "/Applications/anaconda3/envs/fenicsproject/lib/python3.10/site-packages/ufl/log.py", line 172, in error
    raise self._exception_type(self._format_raw(*message))
ufl.log.UFLException: Can't add expressions with different shapes.

It seems that you are misunderstanding the usage of the function space ME.

The number of spaces here has nothing to do with the dimension of your mesh, but with the underlying unknowns in your PDE.

The only thing you need to change to solve the Cahn-Hilliard equation on a 3D structure is the mesh generation:

mesh = UnitCubeMesh.create(10, 10, 10, CellType.Type.tetrahedron)

All the other code can stay as it is in the demo, as sketched below.
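A minimal sketch (assuming DOLFIN 2019.1.0, as in the demo): c and mu remain two scalar fields in any mesh dimension, so the mixed space keeps two sub-elements and value_shape() stays (2,).

mesh = UnitCubeMesh.create(10, 10, 10, CellType.Type.tetrahedron)
P1 = FiniteElement("Lagrange", mesh.ufl_cell(), 1)
ME = FunctionSpace(mesh, P1*P1)  # two unknowns (c, mu), not one per coordinate axis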


Sure, I made a big mistake!
Following your suggestion, I get this other kind of problem:

 File "/Users/lorenzomarta/Desktop/Simulazioni_demo_fenics/Cahn-Hilliard/demo_cahn-hilliard.py", line 242, in <module>
    u.interpolate(u_init)
  File "/Applications/anaconda3/envs/fenicsproject/lib/python3.10/site-packages/dolfin/function/function.py", line 363, in interpolate
    self._cpp_object.interpolate(u._cpp_object)
RuntimeError: 

*** -------------------------------------------------------------------------
*** DOLFIN encountered an error. If you are not able to resolve this issue
*** using the information listed below, you can ask for help at
***
***     fenics-support@googlegroups.com
***
*** Remember to include the error message listed below and, if possible,
*** include a *minimal* running example to reproduce the error.
***
*** -------------------------------------------------------------------------
*** Error:   Unable to interpolate function into function space.
*** Reason:  Dimension 0 of function (3) does not match dimension 0 of function space (2).
*** Where:   This error was encountered inside FunctionSpace.cpp.
*** Process: 0
*** 
*** DOLFIN version: 2019.1.0
*** Git changeset:  
*** -------------------------------------------------------------------------

I guess you still have parts of your code altered from your previous posts.

I would strongly suggest copying everything from the source I pointed you to, and only changing the mesh definition.

Thank you very much. I tried as suggested, but I am facing another error:

Calling FFC just-in-time (JIT) compiler, this may take some time.
Calling FFC just-in-time (JIT) compiler, this may take some time.
Calling FFC just-in-time (JIT) compiler, this may take some time.
Calling FFC just-in-time (JIT) compiler, this may take some time.
Calling FFC just-in-time (JIT) compiler, this may take some time.
Calling FFC just-in-time (JIT) compiler, this may take some time.

UMFPACK V5.7.9 (Oct 20, 2019): ERROR: out of memory

Traceback (most recent call last):
  File "/Users/lorenzomarta/Desktop/Simulazioni_demo_fenics/Cahn-Hilliard/demo_cahn-hilliard.py", line 327, in <module>
    solver.solve(problem, u.vector())
RuntimeError: 

*** -------------------------------------------------------------------------
*** DOLFIN encountered an error. If you are not able to resolve this issue
*** using the information listed below, you can ask for help at
***
***     fenics-support@googlegroups.com
***
*** Remember to include the error message listed below and, if possible,
*** include a *minimal* running example to reproduce the error.
***
*** -------------------------------------------------------------------------
*** Error:   Unable to successfully call PETSc function 'KSPSolve'.
*** Reason:  PETSc error code is: 76 (Error in external library).
*** Where:   This error was encountered inside /Users/runner/miniforge3/conda-bld/fenics-pkgs_1649441947287/work/dolfin/dolfin/la/PETScKrylovSolver.cpp.
*** Process: 0
*** 
*** DOLFIN version: 2019.1.0
*** Git changeset:  
*** -------------------------------------------------------------------------

Well, if you are running with a unit cube with 96 by 96 by 96 subdivisions, that will be about 5.3 million cells, probably too big a problem to solve with a direct LU solver on a small machine. Might I suggest you reduce the problem size to 20 by 20 by 20?
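For a rough sense of scale (a back-of-envelope estimate, not a memory measurement): a UnitCubeMesh with n subdivisions per axis has 6*n**3 tetrahedra and (n+1)**3 vertices, and the mixed P1 space carries two degrees of freedom per vertex.

n = 96
cells = 6 * n**3         # 5,308,416 tetrahedra
dofs = 2 * (n + 1)**3    # about 1.83 million unknowns
print(cells, dofs)

A direct LU factorisation of a 3D problem that size is unlikely to fit in the memory of a typical workstation.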


If I wanted to use a mesh that I have available in .xml.gz format, would I simply replace:

mesh = UnitCubeMesh.create(20, 20, 20, CellType.Type.tetrahedron)

with

mesh = Mesh("../brain.xml.gz")

?

Yes, as in any DOLFIN demo, you can replace the built-in meshes with ones read from file.
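For example, a minimal sketch of reading and sanity-checking the mesh (using the path from the post above):

from dolfin import Mesh

mesh = Mesh("../brain.xml.gz")  # legacy DOLFIN reads .xml/.xml.gz directly
print(mesh.num_cells(), mesh.num_vertices(), mesh.geometry().dim())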

As I imagined, I tried replacing only the mesh definition line as above, but I get the following error message:

 File "/Users/lorenzomarta/Desktop/Simulazioni_demo_fenics/Cahn-Hilliard-brain/demo_cahn-hilliard.py", line 336, in <module>
    solver.solve(problem, u.vector())
RuntimeError: 

*** -------------------------------------------------------------------------
*** DOLFIN encountered an error. If you are not able to resolve this issue
*** using the information listed below, you can ask for help at
***
***     fenics-support@googlegroups.com
***
*** Remember to include the error message listed below and, if possible,
*** include a *minimal* running example to reproduce the error.
***
*** -------------------------------------------------------------------------
*** Error:   Unable to successfully call PETSc function 'KSPSolve'.
*** Reason:  PETSc error code is: 76 (Error in external library).
*** Where:   This error was encountered inside /Users/runner/miniforge3/conda-bld/fenics-pkgs_1649441947287/work/dolfin/dolfin/la/PETScKrylovSolver.cpp.
*** Process: 0
*** 
*** DOLFIN version: 2019.1.0
*** Git changeset:  
*** -------------------------------------------------------------------------


You seem to be running out of memory on your system. How big is your mesh and how much memory do you have available?

Very big: it is a mesh of a brain processed from an MRI, on the order of 60,000 vertices (33.9 MB). By available memory, do you mean RAM? In that case, 16 GB.
Is there a way to process it without losing information?

You can use an iterative solver and run the problem in parallel. See How to choose the optimal solver for a PDE problem? - #2 by nate for information about solvers.

Is there any online example of this type?

Most DOLFIN code runs naturally in parallel using mpirun -n num_procs python3 program.py.

For iterative solvers, I would suggest trying GMRES with a Jacobi preconditioner.
That would mean changing
solver.parameters["linear_solver"]
and
solver.parameters["preconditioner"]
as sketched below.

How could I do the same thing from the demo_poisson file? Besides the mesh, what should I change?

See: Bitbucket
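For what it’s worth, a sketch assuming the 2019.1.0 Poisson demo, which calls solve(a == L, u, bc) directly; the same GMRES/Jacobi choice can be passed as solver parameters:

# replace the demo's plain solve(a == L, u, bc) with
solve(a == L, u, bc,
      solver_parameters={"linear_solver": "gmres",
                         "preconditioner": "jacobi"})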