How to solve in parallel in the case of a DG0 function space

Hi,
I have a mesh of a unit cube consisting of 6 tetrahedral elements, which I made in Gmsh. Here is the geo file:

Point(1) = {0, 0, 0,1};
Point(2) = {0, 1, 0,1};
Point(3) = {0, 0, 1,1};
Point(4) = {0, 1, 1,1};
Point(5) = {1, 0, 0,1};
Point(6) = {1, 1, 0,1};
Point(7) = {1, 0, 1,1};
Point(8) = {1, 1, 1,1};
Line(1) = {1, 3};
Line(2) = {7, 3};
Line(3) = {7, 5};
Line(4) = {5, 1};
Line(5) = {1, 2};
Line(6) = {2, 4};
Line(7) = {4, 3};
Line(8) = {2, 6};
Line(9) = {6, 8};
Line(10) = {8, 4};
Line(11) = {6, 5};
Line(12) = {8, 7};
Line Loop(13) = {8, 9, 10, -6};
Plane Surface(14) = {13};
Line Loop(15) = {8, 11, 4, 5};
Plane Surface(16) = {15};
Line Loop(17) = {6, 7, -1, 5};
Plane Surface(18) = {17};
Line Loop(19) = {4, 1, -2, 3};
Plane Surface(20) = {19};
Line Loop(21) = {11, -3, -12, -9};
Plane Surface(22) = {21};
Line Loop(23) = {7, -2, -12, 10};
Plane Surface(24) = {23};
Surface Loop(25) = {14, 16, 22, 20, 18, 24};
Volume(26) = {25};

Transfinite Surface {14,16,18,20,22,24};

Transfinite Line{1,2,3,4,5,6,7,8,9,10,11,12}=2;

Physical Surface(1) = {14};
Physical Surface(2) = {20};
Physical Volume(1) = {26};

I then saved it in msh format and converted it to XML format using:

dolfin-convert mesh.msh MESH.xml

Then I converted it to HDF5 format by doing:

from dolfin import *
mesh1 = Mesh("MESH.xml")
boundaries = MeshFunction("size_t", mesh1, "MESH_facet_region.xml")
subdomains = MeshFunction("size_t", mesh1, "MESH_physical_region.xml")

hdf = HDF5File(mesh1.mpi_comm(), "0file.h7", "w")
hdf.write(mesh1, "/mesh")
hdf.write(subdomains, "/subdomains")
hdf.write(boundaries, "/boundaries")

Now to what I actually want to do. I want to solve a simple linear elasticity problem on this 6-element unit cube, which is fixed on the left face while a load is applied on the right face. The thing is, I want to assign a different Young’s modulus to each tetrahedral element, which I did using a DG0 function space. Here is the implementation:

from dolfin import *

mesh = Mesh()
hdf = HDF5File(mesh.mpi_comm(), "0file.h7", "r")
hdf.read(mesh, "/mesh", False)

subdomains = MeshFunction('size_t', mesh, mesh.topology().dim())
hdf.read(subdomains, "/subdomains")
boundaries = MeshFunction('size_t', mesh, mesh.topology().dim()-1)
hdf.read(boundaries, "/boundaries")

n = FacetNormal(mesh)
DG = FunctionSpace(mesh, "DG", 0)
E = Function(DG)

E.vector()[0,1,2,3,4,5] = [1E6 , 2E6 , 3E6, 4E6 , 5E6 , 6E6]

dx = Measure('dx', domain=mesh, subdomain_data=subdomains)
ds = Measure('ds', domain=mesh, subdomain_data=boundaries)

nu = 0.3
force = 1000.
mu = project(E/2./(1.+nu),DG)
lmbda = E*nu/(1+nu)/(1-2*nu)

def eps(v):
    return sym(grad(v))

def sigma(v):
    return lmbda * tr(eps(v)) * Identity(3) + 2.0 * mu * eps(v)

V = VectorFunctionSpace(mesh, "CG", 1)
du = TrialFunction(V)
u_ = TestFunction(V)

a = inner(sigma(du), eps(u_)) * dx
l =  inner(force * n, u_) * ds(2)

u = Function(V)
solve(a == l, u, DirichletBC(V, Constant((0., 0.,0.)), boundaries, 1))

File("disp.pvd") << u

Now I want to solve it in parallel. I tried to run it on 4 processes with:

mpirun -np 4 python3 test.py

It returns this error:

Traceback (most recent call last):
  File "test.py", line 19, in <module>
    E.vector()[0,1,2,3,4,5] = [1E6 , 2E6 , 3E6, 4E6 , 5E6 , 6E6]
IndexError: Vector index out of range
Traceback (most recent call last):
  File "test.py", line 19, in <module>
    E.vector()[0,1,2,3,4,5] = [1E6 , 2E6 , 3E6, 4E6 , 5E6 , 6E6]
IndexError: Vector index out of range

It works fine with a single processor (mpirun -np 1 python3 test.py), but I want to use more than one.
The question is: what is the trick to split the E array between the different processes so the problem can be solved properly in parallel?

You should tag each individual tetrahedron with a cell marker, and use these markers to determine which Young’s modulus is required on each processor.
See for instance the thread “How to define materials/import 3D geometry”.

You could create this marker with a serial run, and then load it in a slightly modified parallel program.
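
To make that concrete, here is a minimal sketch of such a parallel program. It is an assumption-laden example, not the exact code from that thread: it reuses the file and dataset names from the question ("0file.h7", "/mesh", "/subdomains") and a hypothetical marker-to-modulus table, since the geo file above only defines a single physical volume. Each process reads the mesh and cell markers and then assigns E cell by cell through the DG0 dofmap, so it only writes the entries of the vector it actually owns:

from dolfin import *

# Read the mesh and the cell markers written earlier (names as in the question)
mesh = Mesh()
hdf = HDF5File(mesh.mpi_comm(), "0file.h7", "r")
hdf.read(mesh, "/mesh", False)
subdomains = MeshFunction("size_t", mesh, mesh.topology().dim())
hdf.read(subdomains, "/subdomains")

DG = FunctionSpace(mesh, "DG", 0)
E = Function(DG)

# Hypothetical table: cell marker -> Young's modulus
E_values = {1: 1E6, 2: 2E6}

# Each process loops only over the cells it owns and writes the matching
# DG0 dof, so no process touches vector entries it does not hold.
dofmap = DG.dofmap()
E_local = E.vector().get_local()
for cell in cells(mesh):
    E_local[dofmap.cell_dofs(cell.index())[0]] = E_values[subdomains[cell]]
E.vector().set_local(E_local)
E.vector().apply("insert")

Because DG0 has exactly one dof per cell and each process iterates only over its own cells, no process ever indexes past the end of its local vector, which is what caused the IndexError above.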

Usually, mesh markers are either determined geometrically or loaded as input from the mesh generator.
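
As a sketch of the geometric option (an assumed example, not code from this thread): a SubDomain can mark cells based on their coordinates, and this works unchanged in parallel because each process only marks the cells it owns.

from dolfin import *

# Placeholder mesh purely for illustration
mesh = UnitCubeMesh(10, 10, 10)

# Mark every cell that lies entirely in the left half of the cube with 1;
# the remaining cells keep the default value 0.
class LeftHalf(SubDomain):
    def inside(self, x, on_boundary):
        return x[0] <= 0.5 + DOLFIN_EPS

markers = MeshFunction("size_t", mesh, mesh.topology().dim(), 0)
LeftHalf().mark(markers, 1)

Such a marker function can then be used in the same way as the "/subdomains" MeshFunction read from the HDF5 file.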

Note that as long as your mesh only has six cells, there is not going to be any speedup when running in parallel.


The thing is, the actual problem I am dealing with is much more complicated than this unit cube with 6 elements; I just wanted to work out the trick here so I can apply it to my real problem. Anyway, thanks for the pointer. I’ll give it a try.

Hi Dokken,

I came across this post and your answers while searching on Google.
Almost 2 years have passed since this post, so I am not sure whether you will read this, but:

I am now working on a project using FEniCS (not FEniCSx) in which parallelization is needed, because we need to solve a large-scale problem. So I think your answer here could be a workaround for the problem we are facing.

Looking at your answer in “How to define materials/import 3D geometry”, is the “modified parallel program” a code that solves the problem using the XDMF file that includes the cell markers?

What exactly is the “modified parallel program” you mentioned?
Could you share it here?
It would be greatly appreciated! Many thanks.