Transform vector function to RT space

Hello all, I have a 2D vector function defined in a VectorFunctionSpace and want to transform it into an RT function in dolfinx. I would appreciate it if someone could help me.

The easiest way to do this will be to evaluate your function at the midpoint of each edge, then use these values to calculate the coefficients in the Raviart-Thomas space. I'll code up an example of this this afternoon.

The function you obtain, however, will not be exactly the same, as the RT functions span a space of vectors whose divergence is constant on each cell, and this is not necessarily true of your original function.
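If you don't need to match the edge degrees of freedom exactly, a plain L2 projection into the RT space is a simpler alternative that avoids the dof bookkeeping. A minimal sketch, assuming a dolfinx version where FunctionSpace takes a (family, degree) tuple (the function and argument names are my own):

from mpi4py import MPI
from petsc4py import PETSc
from dolfinx import Function, FunctionSpace
from dolfinx.fem import assemble_matrix, assemble_vector
from ufl import TestFunction, TrialFunction, dx, inner

def project_to_RT(u0, mesh, r=1):
    # Mass-matrix projection: find uh in RT_r with (uh, v) = (u0, v) for all v
    V = FunctionSpace(mesh, ('RT', r))
    u, v = TrialFunction(V), TestFunction(V)
    A = assemble_matrix(inner(u, v) * dx)
    A.assemble()
    b = assemble_vector(inner(u0, v) * dx)
    uh = Function(V)
    ksp = PETSc.KSP().create(MPI.COMM_WORLD)
    ksp.setOperators(A)
    ksp.solve(b, uh.vector)
    return uh

This gives the best approximation in the L2 norm rather than matching edge values, so it will generally differ slightly from the edge-midpoint construction described above.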

mscroggs, thanks for your reply and help. It would be useful to be able to transform between vector functions and RT or Nédélec space functions in both directions. On the one hand, to take the inner product of a vector source function with an RT or Nédélec test function, e.g. for equations that have a vector source term and should be solved in RT space; on the other hand, to transform an RT or Nédélec solution back into a vector function so the data can be exported for further analysis or for ParaView (see the sketch just below). I wrote a function to do the transformation based on another post, but I get an error.
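For the export direction, this is a rough sketch of what I mean: L2-project the RT (or Nédélec) solution into a vector Lagrange space and write that to file. Here `u_rt`, `mesh` and the file name are placeholders, and the XDMFFile API has changed between dolfinx versions; this assumes one with write_mesh/write_function.

from mpi4py import MPI
from petsc4py import PETSc
from dolfinx import Function, VectorFunctionSpace
from dolfinx.fem import assemble_matrix, assemble_vector
from dolfinx.io import XDMFFile
from ufl import TestFunction, TrialFunction, dx, inner

def rt_to_vector(u_rt, mesh):
    # L2-project an RT (or Nedelec) Function into P1 vectors for output
    V = VectorFunctionSpace(mesh, ('CG', 1))
    u, v = TrialFunction(V), TestFunction(V)
    A = assemble_matrix(inner(u, v) * dx)
    A.assemble()
    b = assemble_vector(inner(u_rt, v) * dx)
    uh = Function(V)
    ksp = PETSc.KSP().create(MPI.COMM_WORLD)
    ksp.setOperators(A)
    ksp.solve(b, uh.vector)
    return uh

# write a ParaView-readable file
with XDMFFile(MPI.COMM_WORLD, 'u_rt.xdmf', 'w') as f:
    f.write_mesh(mesh)
    f.write_function(rt_to_vector(u_rt, mesh))

And here is my vector-to-RT attempt: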

from mpi4py import MPI
from petsc4py import PETSc
from dolfinx import Function, FunctionSpace
from dolfinx.fem import assemble_matrix, assemble_vector
from ufl import (FacetNormal, FiniteElement, TestFunction, TestFunctions,
                 TrialFunction, VectorElement, dS, ds, dot, dx, inner)

def vector2RT(u0, mesh, r):
    element = 'RT'
    V = FunctionSpace(mesh, (element, r))
    u = TrialFunction(V)
    # Test space: the RT space itself for r == 1; for r > 1 a mixed space
    # whose vector-DG component carries the interior (moment) test functions
    if r == 1:
        W = FunctionSpace(mesh, (element, 1))
        v0 = TestFunction(W)
    else:
        V0 = FiniteElement(element, mesh.ufl_cell(), r)
        V1 = VectorElement('DG', mesh.ufl_cell(), r - 2)
        W = FunctionSpace(mesh, V0 * V1)
        v0, v1 = TestFunctions(W)

    # Match the normal components of u and u0 on interior and exterior facets
    n = FacetNormal(mesh)
    a = inner(dot(u, n), dot(v0, n))('+') * dS + inner(dot(u, n), dot(v0, n)) * ds
    L = inner(dot(u0, n), dot(v0, n))('+') * dS + inner(dot(u0, n), dot(v0, n)) * ds
    if r > 1:
        # Also match the interior moments against vector-valued DG functions
        a = a + inner(u, v1) * dx
        L = L + inner(u0, v1) * dx

    # A is rectangular: rows indexed by W, columns by V
    A = assemble_matrix(a)
    A.assemble()
    b = assemble_vector(L)

    # Build a square system by keeping the facet rows and the DG-block rows
    A_reduced = PETSc.Mat().create()
    A_reduced.setSizes([A.size[1], A.size[1]])
    A_reduced.setUp()
    b_reduced = PETSc.Vec().create()
    b_reduced.setSizes(A.size[1])
    b_reduced.setUp()

    # dim(W) = dim(V) + n_internal_dofs, so both counts follow from A's
    # sizes; this relies on facet dofs being numbered before interior dofs
    n_facet_dofs = A.size[0] - 2 * (A.size[0] - A.size[1])
    n_internal_dofs = V.dim() - n_facet_dofs

    for i in range(n_facet_dofs):
        nonzero_idx, values = A.getRow(i)
        for j, value in zip(nonzero_idx, values):
            A_reduced.setValue(i, j, value)
        b_reduced[i] = b[i]
    for i in range(n_internal_dofs):
        # Take the DG-block rows, skipping W's redundant RT interior test dofs
        nonzero_idx, values = A.getRow(n_facet_dofs + n_internal_dofs + i)
        for j, value in zip(nonzero_idx, values):
            A_reduced.setValue(n_facet_dofs + i, j, value)
        b_reduced[n_facet_dofs + i] = b[n_facet_dofs + n_internal_dofs + i]

    A_reduced.assemble()
    b_reduced.assemble()

    u = Function(V)
    solver = PETSc.KSP().create(MPI.COMM_WORLD)
    solver.setFromOptions()
    solver.setOperators(A_reduced)
    solver.solve(b_reduced, u.vector)
    return u

What error are you getting? This code runs OK for me.
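In case it helps to narrow things down, this is the kind of minimal driver I would test it with. The mesh size and the sample field are arbitrary choices, and the exact Function.interpolate signature differs between dolfinx versions:

import numpy as np
from mpi4py import MPI
from dolfinx import Function, UnitSquareMesh, VectorFunctionSpace

mesh = UnitSquareMesh(MPI.COMM_WORLD, 8, 8)
W = VectorFunctionSpace(mesh, ('CG', 1))
u0 = Function(W)
# a smooth sample field; interpolate expects shape (2, num_points) here
u0.interpolate(lambda x: np.vstack((x[1], -x[0])))

u = vector2RT(u0, mesh, 1)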