Projection on reduced basis

I want to solve the evolution of the heat equation in 1D, i.e.

\partial_t u(t,x) = \partial_{xx}u(t,x),

using reduced order modelling. Following Dokken's tutorial on the heat equation, I solved it by solving at each time step the linear system

A u^n = F,

with A and F derived from the weak form of the heat equation. I stored the solutions u^n of the full order system in a snapshot matrix. From this snapshot matrix I retrieved a suitable reduced basis V_r by applying an SVD. With this reduced basis V_r I want to create a reduced matrix A_r and a reduced vector F_r that are smaller than A and F and allow me to find the reduced solution by solving

A_r q^n = F_r.

The solution to the heat equation follows by computing

u^n = V_r q^n.

The relationship between A_r and A is given by

A_r = V_r^\intercal A V_r,

and the relationship between F_r and F is given by

F_r = V_r^\intercal F.

Is there a good/nice way to assemble A_r and F_r using dolfinx?

Solution of the heat equation in 1D:

import ufl
from dolfinx import fem, mesh, io, plot
from mpi4py import MPI
from petsc4py import PETSc
import numpy as np

# Mesh and function space
Nx = 4096
domain = mesh.create_interval(MPI.COMM_WORLD, Nx, [0.0, 1.0])
# mesh = dolfinx.UnitIntervalMesh(MPI.COMM_WORLD, Nx)
V = fem.FunctionSpace(domain, ("Lagrange", 1))

# Time variable
dt = 0.00002
T = 0.025
times = np.arange(0,T,dt)

# Parameters
diffusion = 10
mu = 0.5
sigma = 0.02

# Initial conditions
class Gaussian:
    def __init__(self, mu, sigma):
        self.mu = mu
        self.sigma = sigma
    def __call__(self, x):
        return 1 / ( self.sigma * np.sqrt(2 * np.pi) ) * np.exp( - (x[0] - self.mu ) ** 2 / (2 * self.sigma ** 2))

gaussian = Gaussian(mu, sigma)

# Create boundary condition
fdim = domain.topology.dim - 1
boundary_facets = mesh.locate_entities_boundary(
    domain, fdim, lambda x: np.full(x.shape[1], True, dtype=bool))
bc = fem.dirichletbc(PETSc.ScalarType(0), fem.locate_dofs_topological(V, fdim, boundary_facets), V)

# Time scheme
theta = 0.0

# Variational problem at each time
f = fem.Constant(domain, PETSc.ScalarType(0))

u = ufl.TrialFunction(V)
v = ufl.TestFunction(V)
u_n = fem.Function(V)

backward = dt*diffusion*ufl.inner( ufl.grad(u_n), ufl.grad(v))*ufl.dx
forward = dt*diffusion*ufl.inner( ufl.grad(u), ufl.grad(v))*ufl.dx
F = (u-u_n)*v*ufl.dx + theta*backward + (1-theta)*forward

a = ufl.lhs(F)
L = ufl.rhs(F)

bilinear_form = fem.form(a)
linear_form = fem.form(L)

# Insert initial conditions in variational problem
u_n.interpolate(gaussian)

# Assemble stuff
A = fem.petsc.assemble_matrix(bilinear_form, bcs=[bc])
A.assemble()
b = fem.petsc.create_vector(linear_form)

# Define solver
solver = PETSc.KSP().create(domain.comm)
solver.setOperators(A)
solver.setType(PETSc.KSP.Type.PREONLY)
solver.getPC().setType(PETSc.PC.Type.LU)

# Solve the variational problem
uh = fem.Function(V)

snapshot_matrix = []
snapshot_times = []
max_iter = len(times)

for i, t in enumerate(times):
    # Update the right hand side reusing the initial vector
    with b.localForm() as loc_b:
        loc_b.set(0)
    fem.petsc.assemble_vector(b, linear_form)

    # Apply Dirichlet boundary condition to the vector
    fem.petsc.apply_lifting(b, [bilinear_form], [[bc]])
    b.ghostUpdate(addv=PETSc.InsertMode.ADD_VALUES, mode=PETSc.ScatterMode.REVERSE)
    fem.petsc.set_bc(b, [bc])

    # Solve linear problem
    solver.solve(b, uh.vector)
    uh.x.scatter_forward()

    # Update solution at previous time step (u_n)
    u_n.x.array[:] = uh.x.array

    # Store values
    if i % 5 == 0:
        snapshot_matrix.append(np.array(uh.x.array))
        snapshot_times.append(t)
        print(f"Added solution to stack, iteration {i} of max {max_iter}")
# Stack the snapshots as columns so the left singular vectors live in the FE dof space,
# then truncate to the first r modes (r chosen from the singular value decay)
r = 20
X = np.array(snapshot_matrix).T                  # shape (Ndofs, n_snapshots)
U, s, vh = np.linalg.svd(X, full_matrices=False)
V_r = U[:, :r]                                   # reduced basis, shape (Ndofs, r)

You should be able to use the PETSc.Mat and PETSc.Vec functions in petsc4py to compute these triple matrix products.
However, since you are already using np.linalg.svd for the singular value decomposition, the easiest way might be to convert the PETSc matrix A to a numpy array (see Transform dolfinx assemble matrix to numpy array - #2 by dokken)
and compute the triple matrix product using numpy.
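For concreteness, a minimal sketch of that route, assuming V_r is the (Ndofs x r) numpy basis from the SVD above and A, b are the assembled PETSc operators from the script; the matrix is converted via its CSR data and kept dense, which is fine for a 1D problem of this size:

import numpy as np
import scipy.sparse

# Pull the assembled PETSc matrix into scipy/numpy via its CSR data
ai, aj, av = A.getValuesCSR()
A_np = scipy.sparse.csr_matrix((av, aj, ai), shape=A.getSize()).toarray()

# Galerkin projection onto the reduced basis V_r (shape Ndofs x r)
A_r = V_r.T @ A_np @ V_r        # r x r reduced system matrix
F_r = V_r.T @ b.array           # length-r reduced right-hand side

# Reduced solve and reconstruction in the full FE space
q = np.linalg.solve(A_r, F_r)
u_rb = V_r @ q

For larger problems you would keep A_np sparse and only densify the small r x r result of the projection.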


FYI, I am (slowly!) moving my former RBniCS code to dolfinx, see

Carrying out such a projection is a fundamental task in reduced basis methods, and thus you will find it available in the library itself, see in particular

I have taken a look at the RBniCSx package, and it seems to me that you do the following:

  1. Define a problem using bilinear forms a: \mathbb{V}\times\mathbb{V}\to\mathbb{R} and linear forms \ell: \mathbb{V}\to\mathbb{R}.
  2. Assemble matrix A and vector F, and solve Au=F for u.
  3. Form a correlation matrix X^\intercal X from the snapshot matrix X, where in this case X contains solutions for different \mu instead of different t.
  4. Compute eigenvalues and eigenvectors of the correlation matrix X^\intercal X, and use them to find the reduced basis functions V_r.
  5. Define action functions \tilde{a}: \mathbb{V}_r\times\mathbb{V}_r\to\mathbb{R} and \tilde{\ell}: \mathbb{V}_r\to\mathbb{R} using .localForm() and .copy().
  6. Manually compute each entry of A_r and F_r by creating a matrix and a vector of appropriate sizes and filling them entry by entry with A.setValueLocal(m, n, a(fun_m, fun_n), addv=petsc4py.PETSc.InsertMode.ADD) and vec.setValue(n, L(fun_n), addv=petsc4py.PETSc.InsertMode.ADD) (a plain-dolfinx sketch of steps 5 and 6 follows this list).
  7. Use a KSP solver to solve A_r q = F_r for q.
  8. Reconstruct solution using u = V_r q.
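
To make steps 5 and 6 concrete without relying on the RBniCSx API, here is a minimal sketch using plain dolfinx/ufl calls. It assumes the serial script above (V is the function space, a, L, u, v the forms and arguments, V_r the (Ndofs x r) numpy basis, r the number of modes) and ignores Dirichlet lifting, which contributes nothing here since the boundary values are zero:

import numpy as np
import ufl
from dolfinx import fem

# Placeholder Functions standing in for the trial and test arguments; the replaced
# rank-0 forms are compiled once and only their dof arrays change inside the loops.
phi_trial = fem.Function(V)
phi_test = fem.Function(V)
a_r_form = fem.form(ufl.replace(a, {u: phi_trial, v: phi_test}))
L_r_form = fem.form(ufl.replace(L, {v: phi_test}))

A_r = np.zeros((r, r))
F_r = np.zeros(r)
for m in range(r):
    phi_test.x.array[:] = V_r[:, m]
    for n in range(r):
        phi_trial.x.array[:] = V_r[:, n]
        A_r[m, n] = fem.assemble_scalar(a_r_form)   # (A_r)_{mn} = a(phi_n, phi_m)
    F_r[m] = fem.assemble_scalar(L_r_form)          # (F_r)_m = L(phi_m)

Note that L depends on u_n, so the F_r computed this way corresponds to one particular previous-step solution; in parallel each assembled scalar would additionally need an MPI reduction.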

However, the problems in the package are all time independent (correct me if I missed the one that is), whereas I am interested in a time dependent problem. In the time dependent problem the vector F_r needs to be updated using the previous solution q_{t-1}. Doing this by

  1. reconstructing u_{t-1} = V_r q_{t-1},
  2. computing F using u_{t-1},
  3. and projecting F using V_r to F_r

seems very inefficient to me. Is there a way to update F_r whilst staying in the latent space representation?

There certainly is, due to linearity. You need to preassemble the mass matrix projected onto the reduced basis, and assemble the contribution of the previous time step by multiplying this preassembled reduced mass matrix with the latent space representation of the previous solution. This is fairly standard in reduced basis methods; any good RB textbook will explain it.
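
As a minimal numpy sketch of that idea (the names are assumptions: A_np is the full-order system matrix, i.e. mass + dt*diffusion*stiffness, converted to numpy as above, M_np the mass matrix assembled from u*v*ufl.dx and converted the same way, u0 the initial condition dof vector, num_steps the number of time steps): the backward Euler right-hand side is M u^{n-1}, so projecting the mass matrix once lets every step stay in the latent space.

import numpy as np

# Offline stage: preassemble the reduced operators once
A_r = V_r.T @ A_np @ V_r     # projected system matrix (mass + dt*diffusion*stiffness)
M_r = V_r.T @ M_np @ V_r     # projected mass matrix

# Online stage: time stepping entirely in the latent space
q = V_r.T @ u0               # project the initial condition (columns of V_r are orthonormal)
for n in range(num_steps):
    rhs_r = M_r @ q          # previous-step contribution, no reconstruction of u needed
    q = np.linalg.solve(A_r, rhs_r)

u_rb = V_r @ q               # lift back to the FE space only when output is needed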

There will surely be a time dependent example available in the package at some point, but I have no ETA on that. Please get in touch with me via email for further discussion, if any.