H10 norm and H1 norm

Hello,
In this tutorial, in the section “Computing error norms”: https://jsdokken.com/dolfinx-tutorial/chapter4/convergence.html#computing-convergence-rates

I saw that the H01 norm is defined like this:

eh = uh - u_ex
error_H10 = form(dot(grad(eh), grad(eh)) * dx)
E_H10 = np.sqrt(comm.allreduce(assemble_scalar(error_H10), op=MPI.SUM))
if comm.rank == 0:
    print(f"H01-error: {E_H10:.2e}")

while the H1 norm also includes the L2 term:

$$\|u\|_{H^1}^2 = \int_\Omega \nabla u \cdot \nabla u \,\mathrm{d}x + \int_\Omega u \cdot u \,\mathrm{d}x, \qquad \|u\|_{H^1} = \sqrt{\|u\|_{H^1}^2}$$
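In DOLFINx form language this would presumably be assembled in the same style as the tutorial snippet above (a sketch reusing uh, u_ex, comm and the imports from that snippet; ufl's dot reduces to a plain product for scalar fields):

eh = uh - u_ex
error_H1 = form((dot(grad(eh), grad(eh)) + dot(eh, eh)) * dx)  # gradient term + L2 term
E_H1 = np.sqrt(comm.allreduce(assemble_scalar(error_H1), op=MPI.SUM))
if comm.rank == 0:
    print(f"H1-error: {E_H1:.2e}")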

I wanted to know: why, in this tutorial, when making the comparison with the L2 norm, do they say it is better to do it with H10 and not with H1?

Thank you

Where in the tutorial do I state this?
What I state is:

You can of course compute any norm that is of interest to you. If you need H^1, compute the H^1 norm; if you need H_0^1, then compute it.

Sorry, I expressed myself badly. It says “sometimes it’s of interest”, and I wanted to know what you mean by that sentence. Are there any cases where it is better to do so?

There are certain cases where you have mathematical formulas for the expected convergence rates in L^2, H_0^1, and H^1 individually, and in such cases you want to be able to compute each of them.
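For example, for Lagrange $P_k$ elements and a sufficiently smooth exact solution, the standard a priori estimates are

$$\|u - u_h\|_{L^2} \le C h^{k+1}, \qquad |u - u_h|_{H^1_0} \le C h^{k},$$

so one expects rate $k+1$ in the $L^2$ norm and rate $k$ in the $H_0^1$ seminorm; the full $H^1$ norm is dominated by the gradient term and therefore also converges at rate $k$.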

Ok thank you very much for your answers.

I have one last question. The convergence rates at this link are the theoretical values, but if we want the numerical values for our own problem, how do we do that? Do I just replace u_h and u_ex with those of my problem?

https://jsdokken.com/dolfinx-tutorial/chapter4/convergence.html#computing-convergence-rates

Thank you.

These are the numerical values (which match the theoretical values).
To get convergence rates, you need to have an analytical solution to compare against.
The numerical convergence rate is then computed by comparing the approximation to the exact solution.
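Concretely, given errors $E_i$ on meshes with cell size $h_i$, the observed rate between two consecutive refinements is

$$r_i = \frac{\log(E_{i+1}/E_i)}{\log(h_{i+1}/h_i)}.$$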

Is there a question here? You have the errors; it is then straightforward to compute the convergence rates.

Well 've followed your tutorial to get the errors, it works, but can’t get the convergence rates

The rates are explicitly defined (both mathematically and for given errors Es and mesh sizes hs) in the tutorial.
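In the tutorial's notation that is the one-liner

rates = np.log(Es[1:] / Es[:-1]) / np.log(hs[1:] / hs[:-1])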

That’s what I did, but I don’t get a list like yours (which is what I want).

import gmsh
import numpy as np
import ufl
import dolfinx
import dolfinx.fem.petsc
from mpi4py import MPI
from petsc4py.PETSc import ScalarType


def solve(n_mesh=5, R_i=1., R_e=6.9, aspect_ratio=1., E_m=0.8, nu_m=0.35,
          E_i=11., nu_i=0.3, mesh_order=1, dim_func=1, dim_geom=2):

    # MESH DESIGN
    mesh_size = R_i/n_mesh

    mesh_comm = MPI.COMM_WORLD
    model_rank = 0
    gmsh.initialize()
    facet_names = {"inner_boundary": 1, "outer_boundary": 2}
    cell_names = {"inclusion": 1, "matrix": 2}
    model = gmsh.model()
    model.add("Disk")
    model.setCurrent("Disk")
    gdim = dim_geom # geometric dimension of the mesh
    inner_disk = gmsh.model.occ.addDisk(0, 0, 0, R_i, aspect_ratio * R_i)
    outer_disk = gmsh.model.occ.addDisk(0, 0, 0, R_e, R_e)
    whole_domain = gmsh.model.occ.fragment([(gdim, outer_disk)], [(gdim, inner_disk)])
    gmsh.model.occ.synchronize()

    # Add physical tag for bulk
    inner_domain = whole_domain[0][0]
    outer_domain = whole_domain[0][1]
    model.addPhysicalGroup(gdim, [inner_domain[1]], tag=cell_names["inclusion"])
    model.setPhysicalName(gdim, cell_names["inclusion"], "Inclusion")
    model.addPhysicalGroup(gdim, [outer_domain[1]], tag=cell_names["matrix"])
    model.setPhysicalName(gdim, cell_names["matrix"], "Matrix")

    # Add physical tag for boundaries
    lines = gmsh.model.getEntities(dim=1)
    inner_boundary = lines[1][1]
    outer_boundary = lines[0][1]
    gmsh.model.addPhysicalGroup(1, [inner_boundary], facet_names["inner_boundary"])
    gmsh.model.addPhysicalGroup(1, [outer_boundary], facet_names["outer_boundary"])
    gmsh.option.setNumber("Mesh.CharacteristicLengthMin",mesh_size)
    gmsh.option.setNumber("Mesh.CharacteristicLengthMax",mesh_size)
    model.mesh.generate(gdim)
    gmsh.option.setNumber("General.Terminal", 1)
    model.mesh.setOrder(mesh_order)
    gmsh.option.setNumber("General.Terminal", 0)

    # Import the mesh into DOLFINx
    from dolfinx.io import gmshio
    domain, cell_tags, facet_tags = gmshio.model_to_mesh(model, mesh_comm, model_rank, gdim=gdim)
    domain.name = "composite"
    cell_tags.name = f"{domain.name}_cells"
    facet_tags.name = f"{domain.name}_facets"
    gmsh.finalize()

    # Define integration measures
    ds = ufl.Measure("ds", domain=domain, subdomain_data=facet_tags)  # integrals over facets
    dx = ufl.Measure("dx", domain=domain, subdomain_data=cell_tags)  # integrals over cells
    one = dolfinx.fem.Constant(domain, ScalarType(1.))

    # assemble_scalar integrates (sums) a form over the mesh; the measure dx(i)
    # restricts the integral to the cells tagged with i
    area_inclusion = dolfinx.fem.assemble_scalar(dolfinx.fem.form(one * dx(1)))
    area_matrix = dolfinx.fem.assemble_scalar(dolfinx.fem.form(one * dx(2)))
    area_domain = dolfinx.fem.assemble_scalar(dolfinx.fem.form(one * ufl.dx))


    
    # RESOLUTION
    V = dolfinx.fem.VectorFunctionSpace(domain, ("CG", dim_func), dim=2)  # dim_func is the polynomial degree

    def eps(u):
        return ufl.sym(ufl.grad(u))

    I2 = ufl.Identity(2)

    # Hooke's law (as written at the top of this notebook):
    # sigma = lamb * tr(eps) * I + 2 * mu * eps,
    # with mu = E/(2*(1+nu)) and lamb = 2*mu*nu/(1-2*nu)
    def sigma(eps, E, nu):
        mu = E/(2*(1+nu))
        lamb = 2*mu*nu/(1-2*nu)
        return lamb*ufl.tr(eps)*I2 + 2*mu*eps

    u = ufl.TrialFunction(V)
    u_bar = ufl.TestFunction(V)

    bilinear_form_inclusion = ufl.inner(sigma(eps(u),E_i, nu_i), eps(u_bar)) * dx(1)
    bilinear_form_matrix = ufl.inner(sigma(eps(u),E_m, nu_m), eps(u_bar)) * dx(2)
    bilinear_form = bilinear_form_inclusion + bilinear_form_matrix
    g = 0.0  # no body force (no gravity)
    body_force = dolfinx.fem.Constant(domain, ScalarType((0, -g)))
    linear_form = ufl.dot(body_force, u_bar) * ufl.dx

    # Locate the degrees of freedom on the outer boundary facets; the "1" in
    # locate_dofs_topological is the dimension of those facets (the problem is
    # planar, so the facets have dimension 1)
    outer_facets = facet_tags.find(2)
    outer_boundary_dofs = dolfinx.fem.locate_dofs_topological(V, 1, outer_facets)

    uD = dolfinx.fem.Function(V)
    u_on_boundary = lambda x: np.array([-x[1], -x[0]], dtype=ScalarType)  # imposed displacement (-y, -x)
    uD.interpolate(u_on_boundary)

    bc = dolfinx.fem.dirichletbc(uD, outer_boundary_dofs)  # V is not needed here since uD already lives in V

    problem = dolfinx.fem.petsc.LinearProblem(bilinear_form, linear_form, bcs=[bc], 
                                              petsc_options={"ksp_type": "preonly", "pc_type": "lu"})
    u_solution = problem.solve()


    return u_solution,V,domain, dx

And

def errOnU(n_mesh=5, R_i=1., R_e=6.9, aspect_ratio=1., E_m=0.8, nu_m=0.35, E_i=11., nu_i=0.3, mesh_order=1, dim_func=1, dim_geom=2):

    u_solution,V,domain, dx=solve(n_mesh=n_mesh, 
                    R_i=R_i, 
                    R_e=R_e,
                    aspect_ratio=aspect_ratio,
                    E_m=E_m,
                    nu_m=nu_m,
                    E_i=E_i,
                    nu_i=nu_i,
                    mesh_order=mesh_order,
                    dim_func=dim_func,
                    dim_geom=dim_geom)
    
    
    
    solution = EshelbyDisk(V,R_e/R_i, E_i/E_m, nu_i, nu_m)
    u_ref_func = solution.to_function(R_i)
    
    
    # L2 norm of the error
    comm = u_solution.function_space.mesh.comm
    eh = u_solution - u_ref_func
    error = dolfinx.fem.form(ufl.inner(eh, eh) * ufl.dx)
    E = np.sqrt(comm.allreduce(dolfinx.fem.assemble_scalar(error), op=MPI.SUM))

    return E

def hErrOnU(n_mesh=5, R_i=1., R_e=6.9, aspect_ratio=1., E_m=0.8, nu_m=0.35, E_i=11., nu_i=0.3, mesh_order=1, dim_func=1, dim_geom=2):

    u_solution,V, domain, dx=solve(n_mesh=n_mesh, 
                    R_i=R_i, 
                    R_e=R_e,
                    aspect_ratio=aspect_ratio,
                    E_m=E_m,
                    nu_m=nu_m,
                    E_i=E_i,
                    nu_i=nu_i,
                    mesh_order=mesh_order,
                    dim_func=dim_func,
                    dim_geom=dim_geom)
    
    
    
    solution = EshelbyDisk(V,R_e/R_i, E_i/E_m, nu_i, nu_m)
    u_ref_func = solution.to_function(R_i)
    
    
    # H10 seminorm of the error
    comm = u_solution.function_space.mesh.comm
    eh = u_solution - u_ref_func
    error_H10 = dolfinx.fem.form(ufl.inner(ufl.grad(eh), ufl.grad(eh)) * ufl.dx)
    E_H10 = np.sqrt(comm.allreduce(dolfinx.fem.assemble_scalar(error_H10), op=MPI.SUM))

    return E_H10
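
If the full H1 error is wanted as well, it can be combined from these two helpers. A sketch (the name h1ErrOnU is made up here; it solves the problem twice, which is wasteful but keeps the sketch short):

def h1ErrOnU(**kwargs):
    # ||e||_H1 = sqrt(||e||_L2^2 + |e|_H10^2)
    E_L2 = errOnU(**kwargs)
    E_H10 = hErrOnU(**kwargs)
    return np.sqrt(E_L2**2 + E_H10**2)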


And

Ns_func = np.linspace(5,20,15)
Es = np.zeros(len(Ns_func), dtype=ScalarType)
hs_func = np.zeros(len(Ns_func), dtype=np.float64)
list_rates_func = []
list_Es_func = []
dims_func = [1,2]
for n in range(len(dims_func)):
    dim = dims_func[n]
    for i, N in enumerate(Ns_func):
        Es[i] = errOnU(n_mesh=N, mesh_order=2, dim_func=dim)
        hs_func[i] = 1./Ns_func[i]
        
    list_Es_func.append(list(Es))
    rates = np.log(Es[1:]/Es[:-1])/np.log(hs_func[1:]/hs_func[:-1])
    list_rates_func.append(list(rates))

When I print list_rates_func I get:

... [[2.011665794652411, 1.6850005934011927, 1.8840370945040423, 1.966635297510356, 1.9576644014104467, 1.8030249817794606, 2.181434530061626, 1.7213391889052085, 1.7943062319011487, 2.2923098171232454, 1.9922249700633368, 1.5595450180285453, 1.852164312335911, 2.165495469948245], [3.826932420683513, 3.0349402932021596, 3.302177004609476, 3.2509235502998917, 3.412379530596236, 3.472389422791963, 4.127164694728427, 2.567438267394478, 3.6218380077435124, 3.350413987907959, 3.618251844630758, 2.682745423225503, 3.2143056438838937, 4.9182560160857]]

I don’t understand what you expect from this.

You are not refining your mesh by refining the cells (which is not trivial for higher-order cells); instead you generate a new mesh for each refinement. This means that you will not get machine-precision convergence rates (say 3.99, 2.99, 1.99, 0.99, etc.), as the boundary is resolved differently on each refinement level.

Does it mean my code and my graph are not correct, or that for this kind of problem I can't get machine-precision convergence rates?

You should be able to see a convergence rate, but it might fluctuate between, say, 1.8 and 2.2 if you don't use an actually refined mesh (but a newly generated mesh with a smoother boundary and different elements in different places). It is up to you to determine if the rate is acceptable, both with respect to the size of the error and the rate of reduction.

Normally I should find convergence rates between 1.00 and 3.00, because in theory we find, for H1: 1 for P1 and 2 for P2; and for L2: 2 for P1 and 3 for P2.

Are the 1.8 and 2.2 you are talking about only for L2? If yes, how do you know it fluctuates between 1.8 and 2.2?

The point is that if the theory says the convergence rate is 2, then you can expect rates between 1.8 and 2.2 if you have implemented everything correctly with your current mesh setup.

Similarly, if the expected rate is 3, it should normally be between 2.8 and 3.2 etc.

But I don't understand how to get the values you mentioned (fluctuating between, say, 1.8 and 2.2) with my code.

Well, you more or less get that.

The first sub-list contains the rates for the 1st-order elements, and the second one contains those for the 2nd-order elements.

OK, so for Lagrange P1 in the L2 norm I can understand getting 2.16, but for Lagrange P2 in the L2 norm I cannot understand getting 4.91. That is much too far from the theoretical value, which is 3.00.

I don't understand why I get such a high value; maybe because I set mesh_order=2 and not 1?

I would suggest you try to simplify your problem to debug it. There are many things that could be wrong. For instance, hs_func[i] = 1./Ns_func[i] makes an assumption about what cell size Gmsh actually generates; the mesh size should rather be measured directly, e.g. as the max value of dolfinx.cpp.mesh.h (see the DOLFINx 0.6.0 documentation).
See https://github.com/FEniCS/dolfinx/blob/v0.6.0/python/test/unit/mesh/test_mesh.py#L357 for usage with v0.6.0
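A minimal sketch of what that could look like, assuming the v0.6.0-style API where dolfinx.cpp.mesh.h(mesh, dim, entities) returns the diameter of each listed cell (the helper name max_cell_size is made up):

import numpy as np
from mpi4py import MPI
import dolfinx

def max_cell_size(domain):
    # diameter of every locally owned cell, reduced to a global maximum
    tdim = domain.topology.dim
    num_cells = domain.topology.index_map(tdim).size_local
    h_local = dolfinx.cpp.mesh.h(domain, tdim, np.arange(num_cells, dtype=np.int32))
    return domain.comm.allreduce(h_local.max(), op=MPI.MAX)

hs_func[i] = max_cell_size(domain) would then replace hs_func[i] = 1./Ns_func[i] (this requires errOnU to also return the mesh, or the measured h).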

I use FEniCSx 0.5.1. After defining something like test_hmin_hmax, do I delete hs_func[i] = 1./Ns_func[i] and replace it with what?

Sorry, I don't understand. The mesh is supposed to be the same everywhere, namely triangular.
