MPC backsubstitution in parallel

Your workaround works for me as well! Thank you very much!


I haven’t spotted this usage in the demo before, and I don’t see how it can be correct. One should never map a coordinate to NaN (not a number), yet that is what the output of that function now contains.

What I assumed when I saw that usage was that, by mapping certain coordinates to np.nan, those constraints would in fact not be generated by mpc.create_periodic_constraint_topological, thus effectively selecting which constraints not to add to the mpc. But it’s just a guess…

It’s a fair guess. I’m worried that passing NaNs down to C++ might go wrong.
As I’ve said earlier, if one could create a simpler problem, with just the single problematic interface as a constraint, it would be a lot easier for me to look into what goes wrong.

The issue here is that func_sol uses the wrong function space.
It should use:

func_sol = Function(mpc.function_space)

as this function space has the extra dofs required for backsubstitution.
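
For context, here is a minimal sketch of how this typically fits into a dolfinx_mpc solve; a, L and bcs are placeholders for your forms and boundary conditions (not from this thread), and the exact backsubstitution signature can vary between dolfinx_mpc versions:

import dolfinx_mpc
from dolfinx.fem import Function

problem = dolfinx_mpc.LinearProblem(a, L, mpc, bcs=bcs)
func_sol = problem.solve()  # returned in mpc.function_space, backsubstitution already applied

# Or, when driving a solver manually with func_sol = Function(mpc.function_space):
# mpc.backsubstitution(func_sol)  # fills the constrained dofs from their masters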


Furthermore, to have a unique solution to your problem, I added:


# T_fluc_avg was not defined in the original snippet: it is the integral of the solution
vol = domain.comm.allreduce(
    assemble_scalar(form(1 * dx)), op=MPI.SUM)
T_fluc_avg = domain.comm.allreduce(
    assemble_scalar(form(func_sol * dx)), op=MPI.SUM)
func_sol.x.array[:] -= T_fluc_avg / vol

so that the solution returned is the one with zero mean.
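
If you want to double-check the normalization, here is a quick sanity check, reusing dx and vol from above (the tolerance is illustrative):

T_avg = domain.comm.allreduce(
    assemble_scalar(form(func_sol * dx)), op=MPI.SUM) / vol
assert abs(T_avg) < 1e-10  # the mean should now be (numerically) zero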

This works! Thank you so much!

Which extra DOFs do you mean? I thought that mpc.function_space is the same as the function space that is the input for creating the mpc class: mpc = dolfinx_mpc.MultiPointConstraint(V).

When you work in parallel, the degrees of freedom are distributed among the processes.
For the MPC to work, each process that owns constrained (slave) degrees of freedom needs access to the degrees of freedom governing the constraint equation (the masters), and vice versa.

These additional ghost dofs on each process are what make the assembly and back-substitution scalable.
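
One way to see this, as a minimal sketch (assuming V is the space originally passed to MultiPointConstraint, and a scalar space so the dofmap index map counts individual dofs), is to compare the index maps of the two spaces on each rank:

im_v = V.dofmap.index_map
im_mpc = mpc.function_space.dofmap.index_map
# The MPC space typically carries extra ghost dofs for the master/slave coupling
print(f"rank {V.mesh.comm.rank}: "
      f"V: {im_v.size_local} owned / {im_v.num_ghosts} ghost dofs, "
      f"mpc.function_space: {im_mpc.size_local} owned / {im_mpc.num_ghosts} ghost dofs")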


Got it. Thank you very much!