Question about saving the output in a more compact manner in parallel

Hi,

I was wondering if there is a more compact way to save output than .pvd. When I run a time-dependent code on 40 processes and save the output as .pvd, it writes the domain in 40 .vtu chunks (one per process) at every time step, plus a .pvtu that gathers them for each step and a .pvd that collects everything. Is there a more compact, parallel-compatible way to do this? Thank you so much for your help.

Best,

Look into using the scalable XDMFFile, which stores the heavy data in an HDF5 (.h5) binary file. E.g. XDMFFile("something.xdmf").write(mesh)

2 Likes

Thank you Nate. I tried it and it works fine for functions that change in time. However, I have issues with MeshFunctions. Suppose I have a MeshFunction that changes in time and I want to save it at each time step using the code below:

    from fenics import *

    mesh = UnitSquareMesh(10, 10)
    Inside = MeshFunction("size_t", mesh, 2)
    V = VectorFunctionSpace(mesh, 'P', 1)
    u = Expression(('1', '1'), degree=0)
    U = project(u, V)

    t = 0.0
    file1 = XDMFFile(MPI.comm_world, "Inside.xdmf")
    file1.parameters["flush_output"] = True
    for n in range(100):
        ALE.move(mesh, U)
        file1.write(Inside)
        t += 1.0

This gives me an error in ParaView saying “unable to read data”. And when I replace

file1.write(Inside)

with

file1.write(Inside,t)

I get the following AttributeError from my code:

‘dolfin.cpp.mesh.MeshFunctionSizet’ object has no attribute ‘_cpp_object’

Do you know what’s happening here?