Correct way to compute internal interface fluxes

I have read that there are some subtleties (over my head) when it comes to computing interface fluxes, because the ('+'/'-') ordering is somewhat arbitrary. A solution/example of a correct way to compute them is given in Add one-sided integration example as example of custom integration · Issue #158 · jorgensd/dolfinx-tutorial · GitHub.

However, in my case I do not get any significant difference. For example:
J_U through theta = pi/2 (+): 0.0001628704064196176 (incorrect) vs
0.0001628704064196176 (correct).
There is zero difference between the two values, even though the two ways they are computed look totally different. (The same holds for the other fluxes on another internal interface.)

My code is gigantic, so here are only the relevant parts. First, the supposedly correct way to compute the fluxes:

        mesh = self.thermoelec_sim.the_mesh.mesh
        ct = self.thermoelec_sim.the_mesh.cell_markers   # cell tags (20, 21)
        ft = self.thermoelec_sim.the_mesh.facet_markers  # facet tags (22–27)
        n = self.thermoelec_sim.the_mesh.n

        tdim = mesh.topology.dim
        fdim = tdim - 1
        mesh.topology.create_entities(fdim)
        mesh.topology.create_connectivity(fdim, tdim)
        mesh.topology.create_connectivity(tdim, fdim)

        facet_map = mesh.topology.index_map(fdim)
        f_to_c = mesh.topology.connectivity(fdim, tdim)
        c_to_f = mesh.topology.connectivity(tdim, fdim)

        # -------------------------------
        # Helper: build one-sided entities
        # -------------------------------
        def build_one_sided_entities(facet_tag_id: int, chosen_cell_subdomain_id: int):
            facets_to_integrate = ft.find(facet_tag_id)
            integration_entities = []
            for facet in facets_to_integrate:
                if facet >= facet_map.size_local:
                    continue
                cells = f_to_c.links(facet)
                marked_cells = ct.values[cells]
                correct = np.flatnonzero(marked_cells == chosen_cell_subdomain_id)
                assert len(correct) == 1, (
                    f"Facet {facet} does not have exactly one cell with subdomain "
                    f"{chosen_cell_subdomain_id}. Found: {marked_cells}"
                )
                correct_cell = cells[correct[0]]
                local_facets = c_to_f.links(correct_cell)
                local_index = np.flatnonzero(local_facets == facet)
                assert len(local_index) == 1
                integration_entities.append(correct_cell)
                integration_entities.append(local_index[0])
            return np.asarray(integration_entities, dtype=np.int32)
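The (cell, local_facet_index) pairing that this helper builds can be illustrated with plain NumPy, independently of DOLFINx. The adjacency data below is made up purely for illustration (facet 5 shared by cells 2 and 7, tagged 20 and 21):

```python
import numpy as np

# Mock facet-to-cell and cell-to-facet connectivity (hypothetical data):
# facet 5 is shared by cells 2 and 7; cell 2 is tagged 20, cell 7 is tagged 21.
f_to_c = {5: np.array([2, 7])}
c_to_f = {2: np.array([1, 5, 9]), 7: np.array([5, 3, 8])}
cell_tags = {2: 20, 7: 21}

def one_sided_entity(facet, chosen_subdomain):
    """Return the (cell, local_facet_index) pair for `facet`,
    seen from the cell lying in `chosen_subdomain`."""
    cells = f_to_c[facet]
    marked = np.array([cell_tags[c] for c in cells])
    (idx,) = np.flatnonzero(marked == chosen_subdomain)  # exactly one match
    cell = cells[idx]
    # Local index of `facet` within the chosen cell's facet list
    (local,) = np.flatnonzero(c_to_f[cell] == facet)
    return int(cell), int(local)

print(one_sided_entity(5, 20))  # → (2, 1): facet 5 is cell 2's second local facet
print(one_sided_entity(5, 21))  # → (7, 0): same facet, seen from cell 7
```

The same facet yields a different (cell, local index) pair depending on which subdomain you pick, which is exactly what fixes the side (and hence the normal) of the integration.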

        # -------------------------------
        # Define subdomains
        # -------------------------------
        SubA = 20  # three_quarter_material
        SubB = 21  # quarter_material

        # -------------------------------
        # Build one-sided measures
        # -------------------------------
        entities_theta0_from_A = build_one_sided_entities(22, SubA)
        entities_theta0_from_B = build_one_sided_entities(22, SubB)
        entities_theta90_from_A = build_one_sided_entities(23, SubA)
        entities_theta90_from_B = build_one_sided_entities(23, SubB)

        ds_theta0_from_A = Measure("ds", domain=mesh,
                                    subdomain_data=[(1001, entities_theta0_from_A)])
        ds_theta0_from_B = Measure("ds", domain=mesh,
                                    subdomain_data=[(1002, entities_theta0_from_B)])
        ds_theta90_from_A = Measure("ds", domain=mesh,
                                        subdomain_data=[(1003, entities_theta90_from_A)])
        ds_theta90_from_B = Measure("ds", domain=mesh,
                                        subdomain_data=[(1004, entities_theta90_from_B)])

        n = FacetNormal(mesh)

        # -------------------------------
        # Assemble fluxes
        # -------------------------------
        comm = mesh.comm
        def assemble(form):
            return comm.allreduce(assemble_scalar(form), op=MPI.SUM)

        print(f"""
        --- Using ds (correct one-sided orientation) ---
        J_U through theta = pi/2 (+): {assemble(form(dot(J_U_ufl, n) * ds_theta90_from_A(1003)))} (correct)
        """)

And here is the usual, incorrect way:

J_U_on_facet_plus = (
            - (κ * grad(temp)('+'))   
            + temp('+') * Seebeck_tensor('+') * J_vector('+')
            + volt('+') * J_vector('+')
        )
J_U_pi_over_2_plus = assemble_scalar(form(dot(J_U_on_facet_plus, n('+')) * dS(23)))
print(f"""Fluxes analysis:
J_U through theta = pi/2 (+): {J_U_pi_over_2_plus} (incorrect)
""")

Here dS(23) is the internal boundary between a quarter annulus and a three-quarter annulus (mesh created using Gmsh).

I am wondering whether I am computing the internal fluxes correctly (this is very important for my results).
Thank you!

The key take-aways are:

  1. In most finite element methods, the jump of a quantity across a surface does not require a specific orientation. For instance, in DG methods you get jump integrals across all internal facets of the grid, but mathematically the formulation doesn’t care which cell is positive and which one is negative.
  2. DOLFINx follows the same notion: for an interior facet integral (dS), which cell comes first (+) and which one comes last (-) doesn’t matter, provided the variational formulation is written correctly.
  3. That you get the same result for the two is by chance. Running the problem in parallel might produce different results, as the mesh gets partitioned and the ordering might differ.
  4. Using a one-sided integral will always be correct. You could manually orient the interior facet entities instead, but that just requires more memory.
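Point 3 can be made concrete with a toy calculation: a one-sided term like dot(J, n('+')) flips sign depending on which cell the assembler happens to label '+', while a symmetric (jump-type) combination does not. A minimal NumPy sketch, with made-up vectors at a straight interface:

```python
import numpy as np

# Hypothetical flux vector and facet normals at a straight interface
J = np.array([1.0, 0.0])          # flux is single-valued (continuous) here
n_from_A = np.array([1.0, 0.0])   # outward normal of the cell on side A
n_from_B = -n_from_A              # outward normal of the cell on side B

# A one-sided term dot(J, n('+')) changes sign with the '+' labelling:
flux_if_A_is_plus = float(np.dot(J, n_from_A))   # +1.0
flux_if_B_is_plus = float(np.dot(J, n_from_B))   # -1.0

# A symmetric combination such as dot(J('+'), n('+')) + dot(J('-'), n('-'))
# is invariant under swapping '+' and '-':
sym_A_plus = np.dot(J, n_from_A) + np.dot(J, n_from_B)
sym_B_plus = np.dot(J, n_from_B) + np.dot(J, n_from_A)
print(flux_if_A_is_plus, flux_if_B_is_plus, sym_A_plus == sym_B_plus)
```

This is why a dS integral of a one-sided integrand can change sign (or mix signs facet by facet) once the mesh is partitioned and the '+'/'-' assignment changes, whereas fixing the side explicitly via a one-sided ds measure is always well-defined.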

Thanks @dokken for the reply and information.

Just to make sure I understand.
From 1 and 2, it looks like the “correct” and “incorrect” ways to compute these internal facet fluxes usually give the same result, as long as the variational form is correct.
From 3, we learn that this happened here because the program presumably ran on a single process rather than in parallel. Have I understood this correctly?
4. What do you mean exactly by a one-sided integral here? Do you mean the fluxes as I computed them in my previous post? If so, then I do not understand why point 3 says I might have gotten the same result by luck.