How to plot solutions while running fenics code in parallel?

I tried to run my fenics code in both serial and parallel in the HPC cluster.

First I ran the code in serial on the HPC cluster and then plotted the vtu file using the ParaView software. In this case I get a correct plot.

But when I run the same code using "mpirun -n 2 python3" and plot the corresponding vtu file in ParaView, I get a corrupted plot.

I don't know why. Could you please help me produce a correct figure when running in parallel? Thank you!

How did you get the vtu file? If you're using old dolfin, you should use the XDMF format, which supports MPI. XDMF files can be viewed with ParaView.

I would suggest adding your minimal example here, so that people can reproduce the error themselves.
Please add it with the following encapsulation:

# add code here

import random
from IPython.display import HTML, display
from fenics import *
from matplotlib import pyplot
from mpi4py import MPI

# Initialize MPI
comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Define progress function for display
def progress(value, max=100):
    return HTML("""
        <progress value='{value}' max='{max}' style='width: 100%'>
            {value}
        </progress>
        Iterations: {value} of {max}
    """.format(value=value, max=max))

# Define time parameters
t = 0
dt = 0.09
n_steps = round(100 / dt)

d1 = 0.001
d2 = 80

chi = 0  # stable if chi > 4.8280 and unstable if 0 < chi < 4.8280
zeta = 0.5
b = 10

# Define mesh, initial values, and function spaces
mesh = RectangleMesh(Point(0, 0), Point(40, 40), 50, 50)
rn1 = random.random()
rn2 = random.random()
# NOTE: the second component below was truncated in the original post;
# the first component is duplicated here only as a placeholder.
initial = Expression(("0.0253371 + 0.01*cos(pi*x[0]/40)*cos(pi*x[1]/40)",
                      "0.0253371 + 0.01*cos(pi*x[0]/40)*cos(pi*x[1]/40)"),
                     degree=2)
P1 = FiniteElement("P", mesh.ufl_cell(), 1)
element = MixedElement([P1, P1])
W = FunctionSpace(mesh, element)
old_q = interpolate(initial, W)
# num_dof = W.dim()

# Main routine
for i in range(1, n_steps + 1):
    t += dt

    q = Function(W)
    phi = TestFunction(W)

    u, v = split(q)
    phi1, phi2 = split(phi)
    old_u, old_v = split(old_q)

    dot_u = (u - old_u) / dt
    dot_v = (v - old_v) / dt

    if chi == 0:
        F = dot_u * phi1 * dx + d1 * dot(grad(u), grad(phi1)) * dx
    else:
        F = (dot_u * phi1 * dx + d1 * dot(grad(u), grad(phi1)) * dx
             + chi * u * dot(grad(v), grad(phi1)) * dx)

    F += dot_v * phi2 * dx + d2 * dot(grad(v), grad(phi2)) * dx + zeta * v * phi2 * dx

    bc = []

    Jac = derivative(F, q)

    solve(F == 0, q, bc, J=Jac,
          solver_parameters={"newton_solver": {"relative_tolerance": 1e-6}})

    # Save solution to file in VTK format
    if i == 1:
        vtkfile = File('demo_prey_{}.pvd'.format(i))
        vtkfile << q
        pyplot.xlabel('x', fontsize=14)
        pyplot.ylabel('y', fontsize=14)
        p1 = plot(u)

        pyplot.xlabel('x', fontsize=14)
        pyplot.ylabel('y', fontsize=14)
        p2 = plot(v)

    if i % 10 == 0:
        vtkfile = File('demo_PP_{}.pvd'.format(i))
        vtkfile << q
        pyplot.xlabel('x', fontsize=14)
        pyplot.ylabel('y', fontsize=14)
        p3 = plot(u)

        pyplot.xlabel('x', fontsize=14)
        pyplot.ylabel('y', fontsize=14)
        p4 = plot(v)

    # vtkfile = File('demo_prey_{}.pvd'.format(i))
    # vtkfile << q

    # Advance the time step
    old_q.assign(q)

# Finalize MPI (mpi4py finalizes automatically at interpreter exit)
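A side remark on the matplotlib calls inside the loop: in parallel each rank owns only part of the mesh, so plot(u) draws just the local portion on every process. A common pattern is to restrict plotting (and similar side effects) to rank 0. Below is a minimal sketch of that guard; it assumes mpi4py and falls back to serial behaviour when mpi4py is not installed, and the helper name on_root is our own, not part of FEniCS.

```python
# Minimal sketch: run side effects (plotting, printing) on rank 0 only.
try:
    from mpi4py import MPI
    rank = MPI.COMM_WORLD.Get_rank()
except ImportError:
    rank = 0  # no MPI available: behave like a serial run

def on_root(rank, action):
    """Call `action` on the root process only; other ranks get None."""
    return action() if rank == 0 else None

# Example: only rank 0 would create the figure.
label = on_root(rank, lambda: "figure drawn on rank 0")
print(label if label is not None else "rank {} skipped plotting".format(rank))
```

Inside the time loop you would wrap the pyplot calls in such a guard so that only one process touches the figure.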


No, I am using the latest version of dolfin.

Please format your code as I described above.

There is a newer version of dolfin called dolfinx, and I just wanted to ask whether you're using dolfin or dolfinx. Anyway, you are using File, but you should use the XDMFFile class.

Thank you very much. I have installed dolfin version 2019.2.0.dev0. Could you please share a sample that uses the XDMFFile class?

See for instance: Update node coordinates in paraview - #5 by dokken

To add to the answer above, you need to pass MPI.comm_world to run in parallel. So it would be something like

outfile = XDMFFile(MPI.comm_world, "some_name.xdmf")
outfile.write(u, t)
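To connect this to the mixed-space code above: with legacy dolfin you would typically open one XDMFFile per component and append each time step to it. The sketch below is illustrative, not tested against your exact setup; the file names and the pure-Python xdmf_name helper are our own choices, and the dolfin calls are guarded so the snippet degrades gracefully where dolfin is not installed.

```python
def xdmf_name(base, label):
    # Pure helper: build a file name like 'demo_prey.xdmf'
    return "{}_{}.xdmf".format(base, label)

try:
    from dolfin import MPI, XDMFFile

    def open_outputs(base):
        # One XDMF file per component of the mixed solution
        return {
            "u": XDMFFile(MPI.comm_world, xdmf_name(base, "prey")),
            "v": XDMFFile(MPI.comm_world, xdmf_name(base, "predator")),
        }

    def write_step(files, q, t):
        u, v = q.split()        # components of the mixed Function
        files["u"].write(u, t)  # appending with a time value is MPI-safe
        files["v"].write(v, t)
except ImportError:
    pass  # dolfin not installed; the sketch is only illustrative
```

Inside the time loop you would then call write_step(files, q, t) instead of vtkfile << q, and each XDMF file can be opened directly in ParaView after a parallel run.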