Hi everyone,
I am working on integrating a neural network-based constitutive model into a FEniCSx simulation and, importantly, I want to train the NN by minimizing a loss function based on full-field experimental data.
What I Understand So Far
- It is possible to define the NN architecture directly in UFL, or to use an ExternalOperator (dolfinx-external-operator) to embed the NN inside the FEniCSx simulation (see the UFL sketch after this list for what I have in mind).
- This approach works well for pre-trained models, where the NN is already fitted to stress-strain data (from standard material tests rather than full-field measurements).
- In past implementations using FEniCS (not FEniCSx), similar NN-in-UFL approaches were successfully trained using dolfin-adjoint. However, dolfin-adjoint has not been updated for FEniCSx, so there is currently no straightforward way to compute gradients of the FEM solution with respect to the NN parameters (the adjoint sketch below spells out what that computation would involve).
- I cannot simply call FEniCSx from inside automatic differentiation frameworks (such as JAX or PyTorch), because the FEM solve is opaque to them and breaks the computational graph required for backpropagation. Packages like pytorch-fenics and jax-fenics worked around this, but they were also built on dolfin-adjoint (the custom-gradient sketch below shows the pattern I believe they used).
- Numerical differentiation is an option, but it is very costly for a high-dimensional NN parameter vector: a forward-difference gradient needs one extra FEM solve per weight (see the finite-difference sketch below).
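
To make the first point concrete, here is a minimal sketch of what I mean by "NN in UFL": a tiny fully connected network whose weights live in dolfinx Constants, acting on the strain in Voigt form and returning stress components. The architecture (3-8-3), the plane-strain layout, and the layer sizes are placeholders of mine, and the function-space call assumes the newer dolfinx API (0.7+), so treat the details as illustrative rather than definitive.

```python
import numpy as np
import ufl
from mpi4py import MPI
from dolfinx import fem, mesh

domain = mesh.create_unit_square(MPI.COMM_WORLD, 16, 16)
V = fem.functionspace(domain, ("Lagrange", 1, (2,)))  # displacement space (dolfinx >= 0.7 API)
u = fem.Function(V)

# Strain in Voigt-like vector form (2D): [eps_xx, eps_yy, 2*eps_xy]
eps = ufl.sym(ufl.grad(u))
eps_vec = ufl.as_vector([eps[0, 0], eps[1, 1], 2 * eps[0, 1]])

# Tiny MLP 3 -> 8 -> 3; weights/biases are Constants so they can be updated in place
rng = np.random.default_rng(0)
W1 = fem.Constant(domain, rng.standard_normal((8, 3)))
b1 = fem.Constant(domain, np.zeros(8))
W2 = fem.Constant(domain, rng.standard_normal((3, 8)))
b2 = fem.Constant(domain, np.zeros(3))

z1 = ufl.dot(W1, eps_vec) + b1
h = ufl.as_vector([ufl.tanh(z1[i]) for i in range(8)])  # elementwise activation
sig_vec = ufl.dot(W2, h) + b2                           # predicted stress components

sigma = ufl.as_tensor([[sig_vec[0], sig_vec[2]],
                       [sig_vec[2], sig_vec[1]]])

# Weak form of equilibrium with the NN stress (body force and BCs omitted)
v = ufl.TestFunction(V)
residual = ufl.inner(sigma, ufl.sym(ufl.grad(v))) * ufl.dx

# Pre-trained weights (e.g. exported from PyTorch; names hypothetical) load by assignment:
# W1.value[:] = trained_W1; b1.value[:] = trained_b1; ...
```

Keeping the weights in Constants (rather than baking the numbers into the form) should let the compiled form be reused while the weight values change, which is the part that matters once training enters the picture.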
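
For reference, what dolfin-adjoint automated is the adjoint computation below; writing it out shows where the difficulty sits. With the discrete residual R(u, θ) = 0 (θ = NN weights) and a full-field loss J(u, θ):

$$
\frac{\mathrm{d}J}{\mathrm{d}\theta} = \frac{\partial J}{\partial \theta} - \lambda^{\top}\frac{\partial R}{\partial \theta},
\qquad
\left(\frac{\partial R}{\partial u}\right)^{\top}\lambda = \frac{\partial J}{\partial u}.
$$

The adjoint solve itself looks doable by hand in dolfinx; the sketch below continues the names from the previous snippet (`residual`, `u`, `V`), uses a hypothetical measured field `u_obs`, and I have not verified every call against a specific release:

```python
import ufl
from dolfinx import fem
from dolfinx.fem.petsc import LinearProblem

# Full-field misfit against a measured displacement field (filled from experimental data)
u_obs = fem.Function(V)
J = 0.5 * ufl.inner(u - u_obs, u - u_obs) * ufl.dx

# Adjoint problem: (dR/du)^T lambda = dJ/du
dRdu = ufl.derivative(residual, u)   # tangent bilinear form, evaluated at the converged u
adj_lhs = ufl.adjoint(dRdu)          # its transpose
adj_rhs = ufl.derivative(J, u)       # dJ/du as a linear form
lam = LinearProblem(adj_lhs, adj_rhs, bcs=[]).solve()  # homogeneous Dirichlet BCs omitted
```

What I do not see a clean way to get is the remaining term, ∂R/∂θ, when θ sits inside the weight Constants of the UFL expression; that is exactly the piece dolfin-adjoint used to handle for me.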
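
My understanding of pytorch-fenics/jax-fenics is that they registered the FEM solve as a custom differentiable operation whose backward pass was the adjoint gradient above. The sketch below is my own version of that pattern in PyTorch, not an existing API: `solve_forward` and `adjoint_gradient` are hypothetical wrappers around the dolfinx forward solve and the adjoint computation.

```python
import torch

class FEMLoss(torch.autograd.Function):
    """Full-field misfit J(theta): forward runs FEniCSx, backward returns the adjoint gradient.

    solve_forward(theta) -> (u_h, loss) and adjoint_gradient(theta, u_h) -> dJ/dtheta are
    hypothetical wrappers around the dolfinx solves sketched above.
    """

    @staticmethod
    def forward(ctx, theta):
        theta_np = theta.detach().cpu().numpy()
        u_h, loss = solve_forward(theta_np)       # FEniCSx runs outside the autograd graph
        ctx.theta_np, ctx.u_h = theta_np, u_h
        return theta.new_tensor(loss)

    @staticmethod
    def backward(ctx, grad_output):
        # Custom gradient: adjoint-based dJ/dtheta, scaled by the incoming cotangent
        grad_np = adjoint_gradient(ctx.theta_np, ctx.u_h)
        return grad_output * torch.as_tensor(grad_np, dtype=grad_output.dtype)

# Training would then be ordinary PyTorch:
#   theta = torch.nn.utils.parameters_to_vector(net.parameters())
#   loss = FEMLoss.apply(theta)
#   loss.backward()
```

So the autograd plumbing is not the hard part; the missing ingredient is again the adjoint-based gradient that dolfin-adjoint used to supply.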
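
For completeness, the finite-difference baseline I have in mind (`loss_fn` is any hypothetical function that runs the full FEniCSx solve and returns the scalar misfit), which makes the cost explicit: N + 1 nonlinear solves per gradient evaluation for N weights.

```python
import numpy as np

def fd_gradient(loss_fn, theta, h=1e-6):
    """Forward-difference gradient of a scalar loss: N + 1 FEM solves for N parameters."""
    base = loss_fn(theta)                 # 1 solve at the current weights
    grad = np.zeros_like(theta)
    for k in range(theta.size):           # N additional perturbed solves
        pert = theta.copy()
        pert[k] += h
        grad[k] = (loss_fn(pert) - base) / h
    return grad
```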
My Question
I am looking for insights on how to train an NN-based constitutive model inside FEniCSx when the loss function depends on full-field experimental data through the FEM solution itself. If anyone has tackled a similar problem, I would greatly appreciate your thoughts or suggestions.
Thanks in advance for your help!