Export of vector function space as VTX

Hello,

I would like to export a solution of the 2D Navier-Stokes equations as VTX, where the function space contains the components (\rho, \rho u, \rho v, \rho E). Consider the following export:

from mpi4py import MPI
from dolfinx import fem, io, mesh
import ufl

msh = mesh.create_unit_square(MPI.COMM_WORLD, 16, 16)
V = fem.FunctionSpace(msh, ufl.VectorElement("DG", msh.ufl_cell(), 2, 4))
u_vec = fem.Function(V, name="solution")

with io.VTXWriter(msh.comm, "solution.bp", [u_vec], engine="BP4") as vtx:
  vtx.write(0.0)

This piece of code runs successfully and, as expected, creates the directory solution.bp, which can be opened in Paraview. However, in Paraview only the variable “solution” is available, defined as a vector of length 3 containing \rho, \rho u and \rho v. I am not happy with this because:

  1. \rho E was lost in the export,
  2. I cannot plot streamlines because the velocity/momentum vector is not recognized.

How can I modify the export such that Paraview offers the variables \rho, \rho U (vector of length 2 containing \rho u and \rho v) and \rho E?

Best,

Lucas

In the code above you have only stored u_vec, the function named “solution”, so how do you expect Paraview to have any notion of rho?
If you want to store rho*u as its own field, you should write it explicitly to the file.

If u_vec is the solution of the Navier-Stokes equations, it already contains \rho, \rho u, \rho v and \rho E.

As explained in my first post:

This piece of code runs successfully and, as expected, creates the directory solution.bp, which can be opened in Paraview. However, in Paraview only the variable “solution” is available, defined as a vector of length 3 containing ρ, ρu and ρv.

The problem is that Paraview sees (\rho, \rho u, \rho v) as a vector of length 3 and that \rho E is lost.

Sorry, it was hard for me to understand the split of the variables in the first post.
You can do something along the lines of:

from mpi4py import MPI
from dolfinx import fem, io, mesh
import ufl

msh = mesh.create_unit_square(MPI.COMM_WORLD, 16, 16)
V = fem.FunctionSpace(msh, ufl.VectorElement("DG", msh.ufl_cell(), 2, 4))
u_vec = fem.Function(V, name="solution")
u_vec.interpolate(lambda x: (x[0], x[1], -x[1], x[1]+2*x[0]))


rho = u_vec.sub(0).collapse()
rho.name = "rho"
W = fem.FunctionSpace(msh, ufl.VectorElement("DG", msh.ufl_cell(), 2, 2))
rho_uv = fem.Function(W)
rho_uv.name = "uv"
rho_uv.sub(0).interpolate(u_vec.sub(1))
rho_uv.sub(1).interpolate(u_vec.sub(2))
rho_E = u_vec.sub(3).collapse()
rho_E.name = "E"
with io.VTXWriter(msh.comm, "solution.bp", [rho, rho_uv, rho_E], engine="BP4") as vtx:
    vtx.write(0.0)

This enforces the split of the variables.

Thanks, that’s exactly what I need!

However, something is wrong with the ghost cells of rho_uv; here is what I obtain for the magnitude in Paraview:

Don’t forget to scatter the ghost values forward after interpolation, as appropriate. Something like:

rho_uv.x.scatter_forward()
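
In the splitting code above, that would go right after the two component interpolations, i.e. something along these lines:

rho_uv.sub(0).interpolate(u_vec.sub(1))
rho_uv.sub(1).interpolate(u_vec.sub(2))
rho_uv.x.scatter_forward()  # update ghost entries after interpolating into the subspaces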

Thanks, Nate, for your reply. I actually tried scatter_forward, but it did not help.

Do you have a reproducible example of this behavior?

The script below reproduces it. It is a simplified version of this demo, where I simply append the VTX export (and the splitting into \rho, \rho \underline{u} and \rho E). It requires dolfin_dg and this file.

import numpy as np
from mpi4py import MPI
from petsc4py import PETSc
import ufl
import dolfinx
import dolfin_dg
import dolfin_dg.dolfinx.dwr
import dolfin_dg.dolfinx.mark
import dolfinx.fem.petsc as fem_petsc
import dolfinx.nls.petsc as nls_petsc

import generate_mesh

# In this example we use dual weighted residual based error estimates
# to compute the drag coefficient of compressible flow around a NACA0012
# aerofoil.

def info(*msg):
    PETSc.Sys.Print(", ".join(map(str, msg)))

mesh = generate_mesh.generate_naca_4digit(
    MPI.COMM_WORLD, *generate_mesh.parse_naca_digits("0012"), rounded=False,
    gmsh_options={"Mesh.MeshSizeFromCurvature": 80})

# Polynomial order
poly_o = 1

# Initial inlet flow conditions
rho_0 = 1.0
M_0 = 0.5
Re_0 = 5e3
p_0 = 1.0
gamma = 1.4
attack = np.radians(2.0)

# Inlet conditions
c_0 = abs(gamma*p_0/rho_0)**0.5
Re = Re_0/(gamma**0.5*M_0)
n_in = dolfinx.fem.Constant(
    mesh, np.array([np.cos(attack), np.sin(attack)], dtype=np.double))
u_ref = dolfinx.fem.Constant(mesh, M_0*c_0)
rho_in = dolfinx.fem.Constant(mesh, rho_0)
u_in = u_ref * n_in

# The initial guess used in the Newton solver. Here we use the inlet flow.
rhoE_in_guess = dolfin_dg.aero.energy_density(p_0, rho_in, u_in, gamma)
gD_guess = ufl.as_vector((rho_in, rho_in*u_in[0], rho_in*u_in[1], rhoE_in_guess))

# Assign variable names to the inlet, outlet and adiabatic wall BCs. These
# indices will be used to define subsets of the boundary.
INLET = 3
OUTLET = 4
WALL = 2

# Record information in this list
results = []

# Maximum number of refinement levels
n_ref_max = 1
for ref_level in range(n_ref_max):
    info(f"Refinement level {ref_level}")

    # Label the boundary components of the mesh. Initially label all exterior
    # facets as the adiabatic wall, then label the exterior facets far from
    # the aerofoil as the inlet and outlet based on the angle of attack.
    bdry_facets = dolfinx.mesh.locate_entities_boundary(
        mesh, mesh.topology.dim-1,
        lambda x: np.full_like(x[0], 1, dtype=np.int8))

    midpoints = dolfinx.mesh.compute_midpoints(
        mesh, mesh.topology.dim-1, bdry_facets)
    f_normals = dolfinx.cpp.mesh.cell_normals(
        mesh._cpp_object, mesh.topology.dim-1, bdry_facets)

    inner_facets = np.linalg.norm(midpoints, axis=1) < 10.0
    inlet = np.dot(f_normals[:,:2], n_in.value) < 0.0

    values = np.zeros(midpoints.shape[0], dtype=np.int32)
    values[inlet] = INLET
    values[~inlet] = OUTLET
    values[inner_facets] = WALL

    fts = dolfinx.mesh.meshtags(mesh, mesh.topology.dim-1, bdry_facets, values)

    ds = ufl.Measure('ds', domain=mesh, subdomain_data=fts)

    # Problem function space, (rho, rho*u1, rho*u2, rho*E)
    V = dolfinx.fem.VectorFunctionSpace(mesh, ("DG", poly_o), dim=4)
    n_dofs = mesh.comm.allreduce(
        V.dofmap.index_map.size_local * V.dofmap.index_map_bs, MPI.SUM)
    info(f"Problem size: {n_dofs} degrees of freedom")

    u_vec = dolfinx.fem.Function(V, name="u")
    if ref_level == 0:
        # Use the initial guess.
        u_vec.interpolate(
            dolfinx.fem.Expression(gD_guess, V.element.interpolation_points()))
    else:
        # Initial guess by interpolating from old mesh to new mesh
        interp_data = dolfinx.fem.create_nonmatching_meshes_interpolation_data(
            u_vec.function_space.mesh._cpp_object,
            u_vec.function_space.element,
            u_vec_old.function_space.mesh._cpp_object)
        u_vec.interpolate(u_vec_old, nmm_interpolation_data=interp_data)
    u_vec.x.scatter_forward()
    v_vec = ufl.TestFunction(V)

    # The subsonic inlet, adiabatic wall and subsonic outlet conditions
    inflow = dolfin_dg.aero.subsonic_inflow(rho_in, u_in, u_vec, gamma)
    no_slip_bc = dolfin_dg.aero.no_slip(u_vec)
    outflow = dolfin_dg.aero.subsonic_outflow(p_0, u_vec, gamma)

    # Assemble these conditions into DG BCs
    bcs = [dolfin_dg.DGDirichletBC(ds(INLET), inflow),
           dolfin_dg.DGAdiabticWallBC(ds(WALL), no_slip_bc),
           dolfin_dg.DGDirichletBC(ds(OUTLET), outflow)]

    # Construct the compressible Navier Stokes DG formulation, and compute
    # the symbolic Jacobian
    h = ufl.CellVolume(mesh)/ufl.FacetArea(mesh)
    ce = dolfin_dg.CompressibleNavierStokesOperator(mesh, V, bcs, mu=1.0/Re)
    F = ce.generate_fem_formulation(u_vec, v_vec, h_measure=h, c_ip=20.0)
    J = ufl.derivative(F, u_vec)

    # Set up the problem and solve
    problem = fem_petsc.NonlinearProblem(F, u_vec, J=J)
    solver = nls_petsc.NewtonSolver(mesh.comm, problem)

    def updater(solver, dx, x):
        # TODO: This causes a memory leak
        tau = 1.0
        theta = min((2.0*tau/dx.norm())**0.5, 1.0)
        x.axpy(-theta, dx)

    solver.set_update(updater)

    ksp = solver.krylov_solver
    opts = PETSc.Options()
    option_prefix = ksp.getOptionsPrefix()
    opts[f"{option_prefix}ksp_type"] = "preonly"
    opts[f"{option_prefix}pc_type"] = "lu"
    opts[f"{option_prefix}pc_factor_mat_solver_type"] = "mumps"
    ksp.setFromOptions()
    solver.solve(u_vec)
    u_vec.x.scatter_forward()


rho = u_vec.sub(0).collapse()
rho.name = "rho"
W = dolfinx.fem.FunctionSpace(mesh, ufl.VectorElement("DG", mesh.ufl_cell(), poly_o, 2))
rho_uv = dolfinx.fem.Function(W)
rho_uv.name = "uv"
rho_uv.sub(0).interpolate(u_vec.sub(1))
rho_uv.sub(1).interpolate(u_vec.sub(2))
rho_uv.x.scatter_forward()
rho_E = u_vec.sub(3).collapse()
rho_E.name = "E"
with dolfinx.io.VTXWriter(mesh.comm, "solution.bp", [rho, rho_uv, rho_E], engine="BP4") as vtx:
    vtx.write(0.0)

Ah. This is from my project. I’ll look into it.

I’ve simplified the issue down to this:

import numpy as np
from mpi4py import MPI
import dolfinx

mesh = dolfinx.mesh.create_unit_square(MPI.COMM_WORLD, 4, 4)

V = dolfinx.fem.VectorFunctionSpace(mesh, ("DG", 1), dim=2)
u_vec = dolfinx.fem.Function(V, name="u")

u_vec.interpolate(lambda x: np.stack((x[0], x[1])))
u1 = u_vec.sub(0).collapse()
u1.name = "u1"

# Fails in parallel with odd ghosting issues
with dolfinx.io.VTXWriter(
        mesh.comm, "solution.bp", [u_vec, u1], engine="BP4") as vtx:
    vtx.write(0.0)

# Works fine
with dolfinx.io.VTXWriter(
        mesh.comm, "solution_u1.bp", [u1], engine="BP4") as vtx:
    vtx.write(0.0)

Running this in parallel yields erroneous results in the first output, solution.bp; however, solution_u1.bp looks fine.
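
(To reproduce this, the script has to be run on more than one MPI process, e.g. something like mpirun -n 2 python3 mwe.py, where mwe.py is just a placeholder name for a file containing the snippet above.)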

As a workaround until this is fixed, you can output each function to its own dataset. This is verbose and not ideal, but hopefully a future version of dolfinx will fix the underlying issue.

with dolfinx.io.VTXWriter(mesh.comm, "solution_rho.bp", [rho], engine="BP4") as vtx:
    vtx.write(0.0)
with dolfinx.io.VTXWriter(mesh.comm, "solution_rho_uv.bp", [rho_uv], engine="BP4") as vtx:
    vtx.write(0.0)
with dolfinx.io.VTXWriter(mesh.comm, "solution_rho_E.bp", [rho_E], engine="BP4") as vtx:
    vtx.write(0.0)

The problem is that the dofmap of a collapsed subspace is not the same as that of the full space. @jpdean highlighted this in

and we need to add appropriate checks to VTXWriter, or refactor the collapsing to have the same layout.
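
To make this concrete, here is a minimal sketch (using the same dolfinx Python API as the scripts above) showing that the collapsed subspace gets its own dofmap, together with a map back to the parent blocked space:

import numpy as np
from mpi4py import MPI
import dolfinx

mesh = dolfinx.mesh.create_unit_square(MPI.COMM_WORLD, 4, 4)
V = dolfinx.fem.VectorFunctionSpace(mesh, ("DG", 1), dim=2)

# Collapsing a subspace returns a new function space plus the map from the
# collapsed dofs back to the dofs of the parent (blocked) space.
V0, parent_dofs = V.sub(0).collapse()

ndofs = V.dofmap.index_map.size_local * V.dofmap.index_map_bs
ndofs0 = V0.dofmap.index_map.size_local * V0.dofmap.index_map_bs
print(f"rank {mesh.comm.rank}: {ndofs} local dofs in V, {ndofs0} in the collapsed space")

# The map is generally not the identity: the collapsed space has its own dof
# numbering (and, in parallel, its own ghost layout).
print(np.asarray(parent_dofs)[:8])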

Thank you both for your replies and for the workaround.

The issue is currently tracked at: [BUG]: VTXWriter outputs erroneous data in parallel when outputting collapsed space · Issue #2929 · FEniCS/dolfinx · GitHub