Normalising vectors gives poor results

Hi,

I am having difficulty normalising vectors in FEniCS. My 3D vectors e0 are defined in a MixedElement function space that lives on a 1D mesh, i.e., every vertex of the mesh carries a vector. As my simulation runs I update these vectors, and at each step I normalise them to unit length. My understanding is that e0 / sqrt(dot(e0, e0)) should be sufficient to do this. However, when I inspect the resulting values, they contain very large errors (up to 50%).

I converted the normalisation to use Expressions instead, which seems to work perfectly. However, this is incompatible with dolfin-adjoint, which I use to study inverse optimisation of the model. I have looked at Adding Custom Functions, which uses normalisation as its worked example, but I have had no success adapting it to work with ComponentTensors.

I would like to understand why the first method fails so badly, as I have seen it suggested as a normalisation approach elsewhere on this forum. If it is possible to fix it, that would be ideal. Failing that, any advice on making the Expression approach compatible with dolfin-adjoint would also be very much appreciated!

import numpy as np
from dolfin import *


def test_normalisation():
    # Set up the mesh and elements
    N = 10
    mesh = UnitIntervalMesh(N - 1)
    P1 = FiniteElement('Lagrange', mesh.ufl_cell(), 1)
    P1_3 = MixedElement([P1] * 3)

    # V3 is the function space containing a 3D vector (e0) at each vertex of the mesh
    V3 = FunctionSpace(mesh, P1_3)
    e0 = Function(V3)


    def _dof_maps(fs: FunctionSpace) -> np.ndarray:
        """Returns a numpy array for the dof maps of the function space"""
        n_sub = fs.num_sub_spaces()
        if n_sub > 0:
            dof_map = np.array([_dof_maps(fs.sub(d)) for d in range(n_sub)])
        else:
            dof_map = np.array(fs.dofmap().dofs())

        return dof_map

    def fenics_to_numpy(var: Function) -> np.ndarray:
        """Returns a numpy array containing fenics function values"""
        fs = var.function_space()
        dof_maps = _dof_maps(fs)
        vec = var.vector().get_local()
        arr = np.zeros_like(dof_maps, dtype=np.float64)
        for i in np.ndindex(dof_maps.shape):
            arr[i] = vec[dof_maps[i]]

        return arr

    # Initialise the e0 vectors at random
    values = np.random.randn(3, N)
    dof_maps = _dof_maps(V3)
    vec = e0.vector()
    for i in np.ndindex(dof_maps.shape):
        vec[dof_maps[i]] = values[i]

    # Verify that the values are set correctly
    e0_numpy = fenics_to_numpy(e0)
    assert np.allclose(values, e0_numpy)

    # Try to normalise the e0 vectors

    # Method A:
    e0_normalised = e0 / sqrt(dot(e0, e0))
    e0_normalised = project(e0_normalised, V3)
    e0_normalised_numpy = fenics_to_numpy(e0_normalised)
    e0_norms = np.linalg.norm(e0_normalised_numpy, axis=0)
    assert np.allclose(e0_norms, np.ones_like(e0_norms))  # Fails, quite badly

    # Method B:
    e0_norm = Expression('sqrt(v[0]*v[0]+v[1]*v[1]+v[2]*v[2])', v=e0, degree=1)
    e0_normalised = Expression(('e[0]/n', 'e[1]/n', 'e[2]/n'), e=e0, n=e0_norm, degree=1)
    e0_normalised = interpolate(e0_normalised, V3)
    e0_normalised_numpy = fenics_to_numpy(e0_normalised)
    e0_norms = np.linalg.norm(e0_normalised_numpy, axis=0)
    assert np.allclose(e0_norms, np.ones_like(e0_norms))  # Succeeds, but doesn't work for adjoint/inverse problems

The error in Method A comes from the projection to V3, since the normalized vector field is not in V3. You can verify that e0_normalised is correctly normalized at quadrature points (where it would be evaluated when assembling any Form) by checking a functional (e.g., Lebesgue or Sobolev) norm of the error in its magnitude:

    # Method A:
    e0_normalised = e0 / sqrt(dot(e0, e0))
    #e0_normalised = project(e0_normalised, V3)
    #e0_normalised_numpy = fenics_to_numpy(e0_normalised)
    #e0_norms = np.linalg.norm(e0_normalised_numpy, axis=0)
    #assert np.allclose(e0_norms, np.ones_like(e0_norms))  # Fails, quite badly

    # Check error in vector magnitude using L2 norm:
    err = sqrt(dot(e0_normalised, e0_normalised)) - 1.0
    L2_err_norm = sqrt(assemble(err * err * dx))
    assert near(L2_err_norm, 0.0)

The e0_normalised produced by Method B also contains errors, but they are at quadrature points on the interiors of elements, not at nodes (because the Expression is interpolated exactly at nodes), and are thus missed by checking nodal values. You can see that a similar magnitude check based on a functional norm fails for Method B:

    # Method B:
    e0_norm = Expression('sqrt(v[0]*v[0]+v[1]*v[1]+v[2]*v[2])', v=e0, degree=1)
    e0_normalised = Expression(('e[0]/n', 'e[1]/n', 'e[2]/n'), e=e0, n=e0_norm, degree=1)
    e0_normalised = interpolate(e0_normalised, V3)
    #e0_normalised_numpy = fenics_to_numpy(e0_normalised)
    #e0_norms = np.linalg.norm(e0_normalised_numpy, axis=0)
    #assert np.allclose(e0_norms, np.ones_like(e0_norms))  # Succeeds, but doesn't work for adjoint/inverse problems

    # Check the error in a functional norm (using evaluations
    # at quadrature points):
    err = sqrt(dot(e0_normalised, e0_normalised)) - 1.0
    L2_err_norm = sqrt(assemble(err * err * dx))
    assert near(L2_err_norm, 0.0)  # Fails

Thus, Method A is the correct one to use within variational formulations, and there is no exact representation of the non-polynomial normalized vector field in a finite element function space.
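
For example, a minimal sketch (reusing e0 and V3 from your MWE): the UFL expression can be assembled directly into a form with no intermediate projection, since assembly only ever evaluates it at quadrature points, where it is exactly unit length:

    d = e0 / sqrt(dot(e0, e0))  # UFL expression; never projected
    v = TestFunction(V3)
    b = assemble(dot(d, v) * dx)  # d is evaluated exactly at quadrature points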

Many thanks for your detailed reply.

Can you suggest what would be the best way to extract the function values at the vertices for plotting in another application?

I understand you might have to project or interpolate into the function space to get these values, but project doesn't seem to give the expected values, and I'm not sure how to do the interpolation.

Using an Expression is probably the cheapest way to get accurate point values at vertices for visualization. I’ll also mention that the error in the projection is artificially amplified by the use of random numbers to define e0 in your MWE. If the vector field varied smoothly in space (as is often the case in physical examples), the projection error would be much smaller.
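
For example (an untested sketch; compute_vertex_values is standard dolfin API), taking the e0_normalised produced by interpolating the Expression in your "Method B":

    # e0_normalised: the Function obtained from interpolate(...) in "Method B"
    vertex_vals = e0_normalised.compute_vertex_values(mesh).reshape(3, -1)
    # vertex_vals[:, i] is the vector at vertex i (unit length at nodes)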

Thanks, that helps a great deal. You are right that in my actual problem the errors are smaller - but still significant!

@kamensky If I might bother you again 🙂 Using an Expression still requires a projection of the normalised vector field. In the updated MWE below, the line e0_normalised = project(e0_normalised, V3) must be uncommented for the code to run. Is this to be expected? It seems a bit long-winded.

Edit: I should have mentioned that while the project call is needed for compilation, it reintroduces the errors.

import numpy as np
from dolfin import *


def test_normalisation():

    # Set up the mesh and elements
    N = 10
    mesh = UnitIntervalMesh(N - 1)
    P1 = FiniteElement("Lagrange", mesh.ufl_cell(), 1)
    P1_3 = MixedElement([P1] * 3)

    # V3 is the function space containing a 3D vector (e0) at each vertex of the mesh
    V3 = FunctionSpace(mesh, P1_3)
    e0 = Function(V3)

    def _dof_maps(fs: FunctionSpace) -> np.ndarray:
        """Returns a numpy array for the dof maps of the function space"""
        n_sub = fs.num_sub_spaces()
        if n_sub > 0:
            dof_map = np.array([_dof_maps(fs.sub(d)) for d in range(n_sub)])
        else:
            dof_map = np.array(fs.dofmap().dofs())
        return dof_map

    def fenics_to_numpy(var: Function) -> np.ndarray:
        """Returns a numpy array containing fenics function values"""
        fs = var.function_space()
        dof_maps = _dof_maps(fs)
        vec = var.vector().get_local()
        arr = np.zeros_like(dof_maps, dtype=np.float64)
        for i in np.ndindex(dof_maps.shape):
            arr[i] = vec[dof_maps[i]]
        return arr

    # Initialise the e0 vectors at random
    values = np.random.randn(3, N)
    dof_maps = _dof_maps(V3)
    vec = e0.vector()
    for i in np.ndindex(dof_maps.shape):
        vec[dof_maps[i]] = values[i]

    # Verify that the values are set correctly
    e0_numpy = fenics_to_numpy(e0)
    assert np.allclose(values, e0_numpy)

    # Try to normalise the e0 vectors
    e0_normalised = e0 / sqrt(dot(e0, e0))

    # use e0_normalised in variational form
    err = sqrt(dot(e0_normalised, e0_normalised)) - 1.0
    L2_err_norm = sqrt(assemble(err * err * dx))
    assert near(L2_err_norm, 0.0)

    # interpolate e0_normalised to get values out
    # e0_normalised = project(e0_normalised, V3) # put this line in to allow compilation

    expr = Expression(
        ["v[0]", "v[1]", "v[2]"],
        v=e0_normalised,
        degree=1,
    )
    e0_interp = interpolate(expr, V3)
    e0_n = fenics_to_numpy(e0_interp)
    e0_norms = np.linalg.norm(e0_n, axis=0)
    assert np.allclose(e0_norms, np.ones_like(e0_norms))  # Fails

To get a nodally-normalized field, you would need to do the normalization in the string passed to the Expression constructor, as in “Method B” from your original post.
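
A rough, untested sketch of that, reusing e0 and V3 from your MWE (note the coefficient passed as v is the plain Function e0, not the UFL expression, so no project call is needed for compilation):

    expr = Expression(
        ("v[0]/sqrt(v[0]*v[0]+v[1]*v[1]+v[2]*v[2])",
         "v[1]/sqrt(v[0]*v[0]+v[1]*v[1]+v[2]*v[2])",
         "v[2]/sqrt(v[0]*v[0]+v[1]*v[1]+v[2]*v[2])"),
        v=e0, degree=1)
    e0_interp = interpolate(expr, V3)  # nodal norms come out exactly 1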

In scikit-learn there is an API called MinMaxScaler, which lets you customise the target value range as you like. It also handles NaN values.
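
For reference, a small sketch of that API (note this rescales each feature column to a target range, which is different from the unit-length normalisation discussed above; in recent scikit-learn versions, NaNs are ignored during fit and passed through by transform):

    import numpy as np
    from sklearn.preprocessing import MinMaxScaler

    X = np.array([[1.0, 2.0],
                  [3.0, np.nan],
                  [5.0, 6.0]])
    scaler = MinMaxScaler(feature_range=(0.0, 1.0))
    X_scaled = scaler.fit_transform(X)  # each column rescaled to [0, 1]; NaNs preserved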