Hi,
I am having difficulty trying to normalise vectors in FEniCS. My 3D vectors e0 are defined in a MixedElement function space that lives on a 1D mesh, i.e. every vertex of the mesh carries a 3-component vector. As my simulation runs I update these vectors, and at each step I normalise them to unit length. My understanding is that e0 / sqrt(dot(e0, e0)) should be sufficient to do this. However, when I inspect the resulting vectors, their norms contain very large errors (up to 50%).
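To make precise what I mean by "normalise to unit length": at every vertex I want the 3-vector rescaled to norm one. A minimal plain-numpy sketch of that target (the (3, N) layout mirrors how I store the dofs below; the names are just illustrative):

    import numpy as np

    def normalise_columns(vals: np.ndarray) -> np.ndarray:
        """Normalise each column (one 3-vector per mesh vertex) to unit length."""
        return vals / np.linalg.norm(vals, axis=0, keepdims=True)

    vals = np.random.randn(3, 10)       # dummy data standing in for the e0 dofs
    unit = normalise_columns(vals)
    assert np.allclose(np.linalg.norm(unit, axis=0), 1.0)  # every vertex vector has norm 1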
I converted the normalisation to use Expressions instead, which seems to work perfectly. However, this is incompatible with dolfin-adjoint, which I use to study inverse optimisation of the model. I have looked at Adding Custom Functions, which uses normalisation as its worked example, but I have had no success adapting it to work with ComponentTensors.
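For context, my understanding is that any custom dolfin-adjoint block for this operation would ultimately need the derivative of the per-vertex map x -> x/|x|, which applied to a perturbation y gives (y - x (x.y)/|x|^2)/|x|. A small numpy sketch of that vertex-wise Jacobian-vector product (names and layout are mine, purely for illustration), with a finite-difference check:

    import numpy as np

    def normalise_jvp(x: np.ndarray, y: np.ndarray) -> np.ndarray:
        """Directional derivative of x -> x/|x| applied to a perturbation y (both (3, N))."""
        norm = np.linalg.norm(x, axis=0, keepdims=True)
        return (y - x * np.sum(x * y, axis=0, keepdims=True) / norm**2) / norm

    # Finite-difference check of the formula at a random point and direction
    x = np.random.randn(3, 10)
    y = np.random.randn(3, 10)
    eps = 1e-7
    fd = ((x + eps * y) / np.linalg.norm(x + eps * y, axis=0, keepdims=True)
          - x / np.linalg.norm(x, axis=0, keepdims=True)) / eps
    assert np.allclose(fd, normalise_jvp(x, y), atol=1e-5)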
I would like to understand why the first method fails so badly, as I have seen it suggested as a normalisation approach elsewhere on this forum. If it is possible to fix it, that would be ideal. Failing that, any advice on making the Expression-based approach compatible with dolfin-adjoint would also be very much appreciated! A minimal test reproducing both methods follows.
from fenics import *
import numpy as np


def test_normalisation():
    # Set up the mesh and elements
    N = 10
    mesh = UnitIntervalMesh(N - 1)
    P1 = FiniteElement('Lagrange', mesh.ufl_cell(), 1)
    P1_3 = MixedElement([P1] * 3)

    # V3 is the function space containing a 3D vector (e0) at each vertex of the mesh
    V3 = FunctionSpace(mesh, P1_3)
    e0 = Function(V3)

    def _dof_maps(fs: FunctionSpace) -> np.ndarray:
        """Returns a numpy array for the dof maps of the function space."""
        n_sub = fs.num_sub_spaces()
        if n_sub > 0:
            # Recurse into the sub-spaces of the mixed element -> shape (3, N)
            dof_map = np.array([_dof_maps(fs.sub(d)) for d in range(n_sub)])
        else:
            dof_map = np.array(fs.dofmap().dofs())
        return dof_map

    def fenics_to_numpy(var: Function) -> np.ndarray:
        """Returns a numpy array containing the fenics function values."""
        fs = var.function_space()
        dof_maps = _dof_maps(fs)
        vec = var.vector().get_local()
        arr = np.zeros_like(dof_maps, dtype=np.float64)
        for i in np.ndindex(dof_maps.shape):
            arr[i] = vec[dof_maps[i]]
        return arr

    # Initialise the e0 vectors at random
    values = np.random.randn(3, N)
    dof_maps = _dof_maps(V3)
    vec = e0.vector()
    for i in np.ndindex(dof_maps.shape):
        vec[dof_maps[i]] = values[i]

    # Verify that the values are set correctly
    e0_numpy = fenics_to_numpy(e0)
    assert np.allclose(values, e0_numpy)

    # Try to normalise the e0 vectors

    # Method A: UFL expression projected back onto V3
    e0_normalised = e0 / sqrt(dot(e0, e0))
    e0_normalised = project(e0_normalised, V3)
    e0_normalised_numpy = fenics_to_numpy(e0_normalised)
    e0_norms = np.linalg.norm(e0_normalised_numpy, axis=0)
    assert np.allclose(e0_norms, np.ones_like(e0_norms))  # Fails, quite badly

    # Method B: component-wise Expressions interpolated onto V3
    e0_norm = Expression('sqrt(v[0]*v[0]+v[1]*v[1]+v[2]*v[2])', v=e0, degree=1)
    e0_normalised = Expression(('e[0]/n', 'e[1]/n', 'e[2]/n'), e=e0, n=e0_norm, degree=1)
    e0_normalised = interpolate(e0_normalised, V3)
    e0_normalised_numpy = fenics_to_numpy(e0_normalised)
    e0_norms = np.linalg.norm(e0_normalised_numpy, axis=0)
    assert np.allclose(e0_norms, np.ones_like(e0_norms))  # Succeeds, but doesn't work for adjoint/inverse problems
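For completeness, this is roughly how I run the test; seeding numpy is only so the failure is reproducible (the seed value is arbitrary). Running it trips the Method A assert as described above.

    if __name__ == '__main__':
        np.random.seed(0)  # arbitrary seed, purely for reproducibility
        test_normalisation()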