Why is my code converging in 4 steps? Linear PDE

Hi FEniCSxers,
I am solving the linear PDE \nabla \cdot \vec J = 0 using a nonlinear solver (because I already have the code for nonlinear problems, which I solve afterwards). Here, the current density is \vec J = - \sigma \nabla V, where \sigma is the electric conductivity and V is the voltage (a scalar field).
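For reference, multiplying by a test function v and integrating by parts (with the normal current density J prescribed on the two contacts and insulated boundaries elsewhere) gives the weak form used in the code below:

\int_\Omega \sigma \, \nabla v \cdot \nabla V \, dx = \int_{\Gamma_\text{inj}} v \, J \, ds - \int_{\Gamma_\text{out}} v \, J \, ds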

My mesh can be generated with the following geo file:

h  = 0.002415;
r1 = 0.3;
r2 = 0.1;
Point(0) = {r1,0,0,h};
Point(1) = {0,0,0,h};
Point(2) = {r2,0,0,h};
Point(3) = {r1,r1-r2,0,h};
Point(4) = {r1,r1,0,h};
Line(1) = {1,2};
Circle(2) = {2,0,3};
Line(3) = {3,4};
Circle(4) = {4,0,1};
Line Loop(1) = {1,2,3,4};
Plane Surface(1) = {1};
Physical Curve("bottom_left", 12) = {1};
Physical Curve("top_right", 13) = {3};
Physical Surface("material", 11) = {1};
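The .msh file can then be produced with, e.g., gmsh -2 quarter_annulus.geo -o meshes/quarter_annulus.msh.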

And my FEniCSx code is:

from dolfinx import log
from dolfinx.fem import (Constant, Function, functionspace, FunctionSpace, assemble_scalar,
                         dirichletbc, form, locate_dofs_geometrical, VectorFunctionSpace,
                         locate_dofs_topological, Expression)
from dolfinx.fem.petsc import NonlinearProblem
from dolfinx.io import XDMFFile, gmshio
from dolfinx.nls.petsc import NewtonSolver
from mpi4py import MPI
from petsc4py import PETSc
from petsc4py.PETSc import ScalarType
from ufl import (FiniteElement, Measure, MixedElement, SpatialCoordinate,
                 TestFunction, TrialFunction, as_tensor, dot, dx, grad, inner,
                 inv, split, derivative, FacetNormal, div)
import gmsh
from dolfinx.io import VTXWriter
mesh, cell_markers, facet_markers = gmshio.read_from_msh("meshes/quarter_annulus.msh", MPI.COMM_WORLD, gdim=2)

# Reminder of the mesh markers:
#Physical Curve("bottom_left", 12) = {1};
#Physical Curve("top_right", 13) = {3};
#Physical Surface("material", 11) = {1};


def resistance(meshfile):
    mesh, cell_markers, facet_markers = gmshio.read_from_msh(meshfile, MPI.COMM_WORLD, gdim=2)
    
    the_current = 1.0
    current_curves=(12,13)
    voltages_curves=(12,13)
    inj_current_curve, out_current_curve = current_curves
    reading_voltage_curve_0, reading_voltage_curve_1 = voltages_curves

    # Define FE function space
    deg = 3
    el = FiniteElement("CG", mesh.ufl_cell(), deg)
    V = FunctionSpace(mesh, el)    

    v = TestFunction(V)
    volt = Function(V)
  
    rho = 1
    sigma = 1.0 / rho

    # Define the boundary conditions
    left_facets = facet_markers.find(inj_current_curve)
    right_facets = facet_markers.find(out_current_curve)
    left_dofs = locate_dofs_topological(V, mesh.topology.dim-1, left_facets)
    V_current_contact_out = 0
    bcs = [dirichletbc(ScalarType(V_current_contact_out), left_dofs, V)]
    
    
    x = SpatialCoordinate(mesh)
    dx = Measure("dx", domain=mesh, subdomain_data=cell_markers)
    ds = Measure("ds", domain=mesh, subdomain_data=facet_markers)

    J = the_current / assemble_scalar(form(1 * ds(inj_current_curve, domain=mesh)))

    # Weak form.
    J_vector = -sigma * grad(volt)
    F_V = -dot(grad(v), sigma * grad(volt))*dx - v * J * ds(out_current_curve) + v * J * ds(inj_current_curve) 
    weak_form = F_V

    # Solve the PDE.
    from dolfinx.fem.petsc import NonlinearProblem
    from dolfinx.nls.petsc import NewtonSolver

    print(f''' ------- Pre-processing --------
    Current density: {J}
    Length of the side where current is injected: {assemble_scalar(form(1 * ds(inj_current_curve, domain=mesh)))}
    Length of the side where current leaves the wire: {assemble_scalar(form(1 * ds(out_current_curve, domain=mesh)))}
    This should correspond to the current injected: {assemble_scalar(form(J*ds(out_current_curve)))}
    ''')

    # Solve the PDE.
    problem = NonlinearProblem(weak_form, volt)
    solver = NewtonSolver(MPI.COMM_WORLD, problem)
    solver.convergence_criterion = "incremental"
    solver.rtol = 1e-14
    solver.report = True
    solver.max_it = 10000

    ksp = solver.krylov_solver
    opts = PETSc.Options()
    option_prefix = ksp.getOptionsPrefix()
    opts[f"{option_prefix}ksp_type"] = "preonly"
    opts[f"{option_prefix}pc_type"] = "ksp" # Quick code.
    opts[f"{option_prefix}pc_factor_mat_solver_type"] = "mumps"
    ksp.setFromOptions()
    
    log.set_log_level(log.LogLevel.WARNING)
    n, converged = solver.solve(volt)
    assert(converged)
    print(f'''------- Processing --------
    Number of iterations: {n:d}''')

    avg_voltage_curve_integral_0 = assemble_scalar(form(volt * ds(reading_voltage_curve_0, domain=mesh))) / assemble_scalar(form(1 * ds(reading_voltage_curve_0, domain=mesh)))
    avg_voltage_curve_integral_1 = assemble_scalar(form(volt * ds(reading_voltage_curve_1, domain=mesh))) / assemble_scalar(form(1 * ds(reading_voltage_curve_1, domain=mesh)))
    voltage_diff = avg_voltage_curve_integral_1 - avg_voltage_curve_integral_0
    resistance = voltage_diff / assemble_scalar(form(J*ds(out_current_curve)))
    return resistance

res = resistance("meshes/quarter_annulus.msh")
print(res)

When I actually use the Dirichlet boundary condition (adding bcs=bcs to the problem = NonlinearProblem(...) line), I get the output:

Info    : Reading 'meshes/quarter_annulus.msh'...
Info    : 10 entities
Info    : 8160 nodes
Info    : 15991 elements
Info    : Done reading 'meshes/quarter_annulus.msh'
Info    : Reading 'meshes/quarter_annulus.msh'...
Info    : 10 entities
Info    : 8160 nodes
Info    : 15991 elements
Info    : Done reading 'meshes/quarter_annulus.msh'
 ------- Pre-processing --------
    Current density: 10.0
    Length of the side where current is injected: 0.1
    Length of the side where current leaves the wire: 0.09999999999999998
    This should correspond to the current injected: 0.9999999999999998
    
------- Processing --------
    Number of iterations: 6
-3.8782999166063776

When I simply remove the Dirichlet boundary condition, the problem is physically exactly the same (the voltage is defined up to a constant, and here I let the solver fix it instead of enforcing it myself); however, the output is:
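(Note that the pure-Neumann problem is only solvable if the data are compatible: integrating \nabla \cdot \vec J = 0 over the domain gives \oint_{\partial \Omega} \vec J \cdot \hat n \, ds = 0, i.e. the injected and extracted currents must cancel, which the setup above enforces by construction.)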

deg 3 (no bcs)
Info    : Reading 'meshes/quarter_annulus.msh'...
Info    : 10 entities
Info    : 8160 nodes
Info    : 15991 elements
Info    : Done reading 'meshes/quarter_annulus.msh'
Info    : Reading 'meshes/quarter_annulus.msh'...
Info    : 10 entities
Info    : 8160 nodes
Info    : 15991 elements
Info    : Done reading 'meshes/quarter_annulus.msh'
 ------- Pre-processing --------
    Current density: 10.0
    Length of the side where current is injected: 0.1
    Length of the side where current leaves the wire: 0.09999999999999998
    This should correspond to the current injected: 0.9999999999999998
    
------- Processing --------
    Number of iterations: 4
-3.882602522711037

Questions:

  1. Why do I get 6 iterations in the 1st case and 4 iterations in the second? I have been told several times that a linear problem should converge within 1 step, even with a nonlinear solver, but that is not the case here. What is going on?

  2. Why do I get slightly different results when I use the Dirichlet boundary condition versus not using it? Physically, it is the same problem. I increased the degree of the FE space V from 1 to 3, and the two cases still converge to different values. Which one should I trust more?

Start by changing the convergence criterion from «incremental» to «residual».

The solver should then know about the null space, which you can enforce through options.


Thanks a lot dokken for your reply. Changing the convergence criterion makes the code converge in 2 steps, which is better but not perfect.
Regarding the null space, I added the 2 lines:

    nullspace = PETSc.NullSpace().create(constant=True)  # type: ignore
    PETSc.Mat.setNearNullSpace(problem.A, nullspace)

but I got an error along the lines of “attribute A doesn’t exist for NonlinearProblem; did you mean a?”.

I replaced A with a, but got another error (I don’t think it’s worth reporting here). I checked the dolfinx source code, and A and a seem to be slightly different things: A appears to be a matrix form of a. Does that mean I have to explicitly convert a UFL form to a matrix with fem.create_matrix()?
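(In hindsight, and as dokken’s MWE below shows, problem.a is the compiled Jacobian form, while the assembled PETSc matrix is exposed on the Newton solver as solver.A, so no manual conversion with fem.create_matrix() is needed. A minimal sketch:)

problem = NonlinearProblem(weak_form, volt)
solver = NewtonSolver(MPI.COMM_WORLD, problem)
# problem.a is the compiled bilinear (Jacobian) form;
# solver.A is the PETSc matrix assembled from it at each Newton iteration.
nullspace = PETSc.NullSpace().create(constant=True)
solver.A.setNearNullSpace(nullspace)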

As you haven’t provided a minimal reproducible example, I will only give you a partial answer.
Here is an MWE:

from dolfinx import log
from dolfinx.fem import (Function, FunctionSpace, assemble_scalar,
                         form,)
from dolfinx.fem.petsc import NonlinearProblem
from dolfinx.nls.petsc import NewtonSolver
from mpi4py import MPI
from petsc4py import PETSc

from ufl import (FiniteElement, Measure,
                 TestFunction, dot, grad)
from dolfinx.mesh import create_unit_square


def resistance(meshfile):
    mesh = create_unit_square(MPI.COMM_WORLD, 20, 20)
    the_current = 1.0

    # Define FE function space
    deg = 1
    el = FiniteElement("CG", mesh.ufl_cell(), deg)
    V = FunctionSpace(mesh, el)

    v = TestFunction(V)
    volt = Function(V)

    rho = 1
    sigma = 1.0 / rho

    dx = Measure("dx", domain=mesh)
    ds = Measure("ds", domain=mesh)

    J = the_current / \
        mesh.comm.allreduce(assemble_scalar(form(1 * ds)), op=MPI.SUM)

    # Weak form.
    F_V = -dot(grad(v), sigma * grad(volt))*dx - v * J * \
        ds
    weak_form = F_V

    # Solve the PDE.
    problem = NonlinearProblem(weak_form, volt)
    solver = NewtonSolver(MPI.COMM_WORLD, problem)
    solver.convergence_criterion = "residual"
    solver.rtol = 1e-14
    solver.report = True
    solver.max_it = 10000

    ksp = solver.krylov_solver
    opts = PETSc.Options()
    option_prefix = ksp.getOptionsPrefix()
    opts[f"{option_prefix}ksp_type"] = "preonly"
    opts[f"{option_prefix}pc_type"] = "lu"
    opts[f"{option_prefix}pc_factor_mat_solver_type"] = "mumps"
    # Option to support solving a singular matrix
    opts[f"{option_prefix}mat_mumps_icntl_24"] = 1
    opts[f"{option_prefix}mat_mumps_icntl_25"] = 0
    ksp.setFromOptions()
    nullspace = PETSc.NullSpace().create(constant=True)  # type: ignore
    PETSc.Mat.setNearNullSpace(solver.A, nullspace)
    log.set_log_level(log.LogLevel.INFO)
    n, converged = solver.solve(volt)
    assert (converged)
    print(f'''------- Processing --------
    Number of iterations: {n:d}''')


res = resistance("meshes/quarter_annulus.msh")
print(res)

2024-03-30 09:19:19.965 (   0.128s) [main            ]       NewtonSolver.cpp:36    INFO| Newton iteration 0: r (abs) = 0.111803 (tol = 1e-10) r (rel) = inf(tol = 1e-14)
2024-03-30 09:19:19.965 (   0.129s) [main            ]              petsc.cpp:698   INFO| PETSc Krylov solver starting to solve system.
2024-03-30 09:19:19.967 (   0.131s) [main            ]       NewtonSolver.cpp:36    INFO| Newton iteration 1: r (abs) = 0.522516 (tol = 1e-10) r (rel) = 6.35357e-16(tol = 1e-14)
2024-03-30 09:19:19.967 (   0.131s) [main            ]       NewtonSolver.cpp:255   INFO| Newton solver finished in 1 iterations and 1 linear solver iterations.
------- Processing --------
    Number of iterations: 1

I will only give further feedback once you have made an effort to solve your problem with a minimal reproducible example.


I really appreciate your feedback, dokken. I understand about the MWE. In your MWE, though, you seem to inject a current (J) but it doesn’t leave the material, or at least I don’t see where. I will stick to my weak form for now, at least.

Thanks to your code, I am now able to reach convergence in 1 step. I have many questions, and some observations.
First, I get convergence in 1 step regardless of whether I use the bcs or not, which is a good thing.
[comment 0] I still get a different value for the resistance: about -3.88 vs -3.87. This is very strange to me; I was expecting much better agreement, especially given the solver tolerance. In order to investigate which value I can trust, I have set up a different mesh: a simple square where I inject the current on one side and it leaves on the other. I think this is very similar to your MWE. The geo file is

SetFactory("OpenCASCADE");
Rectangle(1) = {0, 0, 0, 1, 1, 0};
Physical Curve("left_end", 5) = {4};
Physical Curve("right_end", 6) = {2};
Physical Surface("rod", 7) = {1};

Injecting a current of 1 A, with a resistivity of 1 ohm m, the resistance should be 1 ohm. I therefore tested my code, adapted to this mesh:
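(Indeed, R = \rho L / (W t) = 1 \cdot 1 / (1 \cdot 1) = 1 ohm for the unit square with unit thickness.)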

from dolfinx import log
from dolfinx.fem import (Constant, Function, FunctionSpace, assemble_scalar,
                         dirichletbc, form, locate_dofs_geometrical, VectorFunctionSpace,
                         locate_dofs_topological, Expression)
from dolfinx.fem.petsc import NonlinearProblem
from dolfinx.io import XDMFFile, gmshio
from dolfinx.nls.petsc import NewtonSolver
from mpi4py import MPI
from petsc4py import PETSc
from petsc4py.PETSc import ScalarType
from ufl import (FiniteElement, Measure, MixedElement, SpatialCoordinate,
                 TestFunction, TrialFunction, as_tensor, dot, dx, grad, inner,
                 inv, split, derivative, FacetNormal, div)
import gmsh
from dolfinx.io import VTXWriter


# Reminder of the mesh:
#Physical Curve("left_end", 5) = {4};
#Physical Curve("right_end", 6) = {2};
#Physical Surface("rod", 7) = {1};

def resistance(meshfile):
    mesh, cell_markers, facet_markers = gmshio.read_from_msh(meshfile, MPI.COMM_WORLD, gdim=2)
    
    the_current = 1.0
    current_curves=(5,6)
    voltages_curves=(5,6)
    inj_current_curve, out_current_curve = current_curves
    reading_voltage_curve_0, reading_voltage_curve_1 = voltages_curves

    # Define FE function space
    deg = 1
    el = FiniteElement("CG", mesh.ufl_cell(), deg)
    V = FunctionSpace(mesh, el)    

    v = TestFunction(V)
    volt = Function(V)
  
    rho = 1
    sigma = 1.0 / rho

    # Define the boundary conditions
    left_facets = facet_markers.find(inj_current_curve)
    right_facets = facet_markers.find(out_current_curve)
    left_dofs = locate_dofs_topological(V, mesh.topology.dim-1, left_facets)
    V_current_contact_out = 0
    bcs = [dirichletbc(ScalarType(V_current_contact_out), left_dofs, V)]
    
    
    x = SpatialCoordinate(mesh)
    dx = Measure("dx", domain=mesh, subdomain_data=cell_markers)
    ds = Measure("ds", domain=mesh, subdomain_data=facet_markers)

    J = the_current / assemble_scalar(form(1 * ds(inj_current_curve, domain=mesh)))
    #J = the_current / mesh.comm.allreduce(assemble_scalar(form(1 * ds)), op=MPI.SUM)

    # Weak form.
    F_V = -dot(grad(v), sigma * grad(volt))*dx - v * J * ds(out_current_curve) + v * J * ds(inj_current_curve) 
    weak_form = F_V

    # Solve the PDE.
    from dolfinx.fem.petsc import NonlinearProblem
    from dolfinx.nls.petsc import NewtonSolver


    print(f''' ------- Pre-processing --------
    Current density: {J}
    Length of the side where current is injected: {assemble_scalar(form(1 * ds(inj_current_curve, domain=mesh)))}
    Length of the side where current leaves the wire: {assemble_scalar(form(1 * ds(out_current_curve, domain=mesh)))}
    This should correspond to the current injected: {assemble_scalar(form(J*ds(out_current_curve)))}
    ''')


    # Solve the PDE.
    problem = NonlinearProblem(weak_form, volt)
    solver = NewtonSolver(MPI.COMM_WORLD, problem)

    ksp = solver.krylov_solver
    opts = PETSc.Options()
    option_prefix = ksp.getOptionsPrefix()
    opts[f"{option_prefix}ksp_type"] = "preonly"
    opts[f"{option_prefix}pc_type"] = "lu"
    opts[f"{option_prefix}pc_factor_mat_solver_type"] = "mumps"
    # Option to support solving a singular matrix
    opts[f"{option_prefix}mat_mumps_icntl_24"] = 1
    opts[f"{option_prefix}mat_mumps_icntl_25"] = 0
    ksp.setFromOptions()
    nullspace = PETSc.NullSpace().create(constant=True)  # type: ignore
    PETSc.Mat.setNearNullSpace(solver.A, nullspace)
    log.set_log_level(log.LogLevel.INFO)
    n, converged = solver.solve(volt)
    assert (converged)
    print(f'''------- Processing --------
    Number of iterations: {n:d}''')

    avg_voltage_curve_integral_0 = assemble_scalar(form(volt * ds(reading_voltage_curve_0, domain=mesh))) / assemble_scalar(form(1 * ds(reading_voltage_curve_0, domain=mesh)))
    avg_voltage_curve_integral_1 = assemble_scalar(form(volt * ds(reading_voltage_curve_1, domain=mesh))) / assemble_scalar(form(1 * ds(reading_voltage_curve_1, domain=mesh)))
    voltage_diff = avg_voltage_curve_integral_1 - avg_voltage_curve_integral_0
    resistance = voltage_diff / assemble_scalar(form(J*ds(out_current_curve)))
    return resistance

res = resistance("meshes/rectangle.msh")
print(res)

Surprisingly, with or without the bcs, I get almost the same result: -1.0000000000000044 without bcs versus -1.0000000000000033 if I make the change to problem = NonlinearProblem(weak_form, volt, bcs=bcs). Both results are correct here; I unfortunately cannot say one is wrong, unlike with my original problem, where there is a clearer discrepancy.

[comment 1] I have also commented out these 2 lines: opts[f"{option_prefix}mat_mumps_icntl_24"] = 1 and opts[f"{option_prefix}mat_mumps_icntl_25"] = 0, but I didn’t see any difference; the results looked the same.

[comment 2] I suppose your change to the current, from J = the_current / assemble_scalar(form(1 * ds(inj_current_curve, domain=mesh))) to J = the_current / mesh.comm.allreduce(assemble_scalar(form(1 * ds)), op=MPI.SUM), makes the code work correctly in parallel over several cores. However, since I print J as a “debug” message, this makes it look “wrong”: I got 0.25 instead of 1.0. I am guessing this is because I have 4 cores and because I didn’t execute the code with mpi. Could you confirm? Since I don’t care that much about speed for now, I will stick with my original version, but that’s good to know, thanks a lot.
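For reference, my understanding of the parallel-safe pattern (a sketch; assemble_scalar only sums the contributions owned by the local process, so the MPI reduction is what makes the value global and identical on all ranks):

local_length = assemble_scalar(form(1 * ds(inj_current_curve, domain=mesh)))
total_length = mesh.comm.allreduce(local_length, op=MPI.SUM)
J = the_current / total_length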

[comment 3] I still don’t know how you managed to get my code to converge within 1 step. I guess the NullSpace code that you added is what did it.

[comment 4] To get an MWE, you can use my code from post #1 with the following file.msh (I have made it coarser so I can paste it here):

$MeshFormat
4.1 0 8
$EndMeshFormat
$PhysicalNames
3
1 12 "bottom_left"
1 13 "top_right"
2 11 "material"
$EndPhysicalNames
$Entities
5 4 1 0
0 0.3 0 0 0 
1 0 0 0 0 
2 0.1 0 0 0 
3 0.3 0.2 0 0 
4 0.3 0.3 0 0 
1 0 0 0 0.1 0 0 1 12 2 1 -2 
2 0.1000000000000001 0 0 0.3 0.2 0 0 2 2 -3 
3 0.3 0.2 0 0.3 0.3 0 1 13 2 3 -4 
4 0 2.775557561562891e-17 0 0.3 0.3 0 0 2 4 -1 
1 0 0 0 0.3 0.3 0 1 11 4 1 2 3 4 
$EndEntities
$Nodes
9 49 1 49
0 1 0 1
1
0 0 0
0 2 0 1
2
0.1 0 0
0 3 0 1
3
0.3 0.2 0
0 4 0 1
4
0.3 0.3 0
1 1 0 2
5
6
0.03333333333325102 0 0
0.06666666666657722 0 0
1 2 0 7
7
8
9
10
11
12
13
0.1038429439423803 0.03901806451898689 0
0.1152240935962743 0.07653668671089414 0
0.1337060777469523 0.1111140469144082 0
0.1585786441003551 0.141421356574974 0
0.1888859536863945 0.1662939226544912 0
0.2234633137657499 0.1847759066011582 0
0.2609819357260938 0.1961570561063693 0
1 3 0 2
14
15
0.3 0.2333333333332429 0
0.3 0.2666666666665726 0
1 4 0 11
16
17
18
19
20
21
22
23
24
25
26
0.2608421422178005 0.2974334583968472 0
0.222354286247288 0.2897777478272476 0
0.1851949699472492 0.277163859611218 0
0.149999999603689 0.2598076209065213 0
0.1173715708373174 0.2380060017343492 0
0.08786796515531267 0.2121320338672412 0
0.06199399755946106 0.1826284282423582 0
0.04019237862615604 0.1499999995868845 0
0.02283614010033391 0.1148050293563756 0
0.01022225204799443 0.07764571328710894 0
0.002566541571773218 0.03915785754384799 0
2 1 0 23
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
0.0413979112448767 0.05504386686537008 0
0.2478738417556512 0.2607948122182109 0
0.1172559504083849 0.1501325043815482 0
0.189491751213472 0.2035707026394489 0
0.06199399765141789 0.1173715710024526 0
0.1497187591145245 0.1825592211061508 0
0.2259535988020302 0.2245187058007437 0
0.07650739812244896 0.0720990456391345 0
0.1534697101733109 0.2210249121950804 0
0.1146622665772489 0.1853377342766507 0
0.1869726972776099 0.2409512587658482 0
0.09733385725251832 0.1083273105897004 0
0.07049813283566345 0.03352290142560477 0
0.04259153983341438 0.08739304523008834 0
0.07792997143130471 0.1471803928066659 0
0.2583016659779281 0.2296274481583706 0
0.2767193656010433 0.2489749153694481 0
0.2148288145545207 0.2589214124439176 0
0.04289251713042833 0.02554492516696457 0
0.2797986583751052 0.2184712890889483 0
0.2766891191412518 0.2751340016931657 0
0.1261616008303159 0.2111500161994755 0
0.08880964708989009 0.1746225430370515 0
$EndNodes
$Elements
3 76 1 76
1 1 1 3
1 1 5 
2 5 6 
3 6 2 
1 3 1 3
4 3 14 
5 14 15 
6 15 4 
2 1 2 70
7 26 1 5 
8 26 5 45 
9 31 34 38 
10 38 29 41 
11 34 31 40 
12 6 2 39 
13 19 35 37 
14 2 7 39 
15 32 30 35 
16 12 13 33 
17 29 10 32 
18 11 30 32 
19 18 19 37 
20 34 8 38 
21 35 30 37 
22 31 38 41 
23 29 32 36 
24 9 29 38 
25 27 26 45 
26 33 13 42 
27 16 17 28 
28 25 26 27 
29 19 20 35 
30 7 8 34 
31 30 12 33 
32 41 29 49 
33 23 24 31 
34 9 10 29 
35 11 12 30 
36 10 11 32 
37 27 34 40 
38 33 28 44 
39 28 33 42 
40 34 27 39 
41 30 33 37 
42 8 9 38 
43 25 27 40 
44 7 34 39 
45 32 35 48 
46 4 16 47 
47 13 3 46 
48 31 24 40 
49 24 25 40 
50 23 31 41 
51 22 23 41 
52 29 36 49 
53 5 6 45 
54 36 32 48 
55 17 18 44 
56 15 4 47 
57 20 21 48 
58 6 39 45 
59 21 22 49 
60 21 36 48 
61 16 28 47 
62 36 21 49 
63 14 15 43 
64 28 17 44 
65 18 37 44 
66 37 33 44 
67 35 20 48 
68 3 14 46 
69 14 43 46 
70 39 27 45 
71 28 42 43 
72 43 42 46 
73 28 43 47 
74 22 41 49 
75 42 13 46 
76 43 15 47 
$EndElements

With this coarse mesh, without the bcs part, I get:

Info    : Reading 'meshes/quarter_annulus.msh'...
Info    : 10 entities
Info    : 49 nodes
Info    : 76 elements
Info    : Done reading 'meshes/quarter_annulus.msh'

2024-03-30 14:23:51.312 (3259.473s) [main            ]              utils.cpp:612   INFO| Compute partition of cells across ranks
2024-03-30 14:23:51.312 (3259.473s) [main            ]         graphbuild.cpp:533   INFO| Building mesh dual graph
2024-03-30 14:23:51.312 (3259.473s) [main            ]         graphbuild.cpp:396   INFO| Build local part of mesh dual graph
2024-03-30 14:23:51.312 (3259.473s) [main            ]         graphbuild.cpp:89    INFO| Build nonlocal part of mesh dual graph
2024-03-30 14:23:51.312 (3259.473s) [main            ]         graphbuild.cpp:545   INFO| Graph edges (local: 184, non-local: 0)
2024-03-30 14:23:51.312 (3259.473s) [main            ]       partitioners.cpp:316   INFO| Compute graph partition using PT-SCOTCH
2024-03-30 14:23:51.313 (3259.473s) [main            ]                MPI.cpp:155   INFO| Computing communicaton graph edges (using NBX algorithm). Number of input edges: 1
2024-03-30 14:23:51.313 (3259.474s) [main            ]                MPI.cpp:218   INFO| Finished graph edge discovery using NBX algorithm. Number of discovered edges 1
2024-03-30 14:23:51.313 (3259.474s) [main            ]         graphbuild.cpp:396   INFO| Build local part of mesh dual graph
2024-03-30 14:23:51.313 (3259.474s) [main            ]           ordering.cpp:202   INFO| GPS pseudo-diameter:(23) 39-0

2024-03-30 14:23:51.313 (3259.474s) [main            ]           Topology.cpp:923   INFO| Create topology
2024-03-30 14:23:51.313 (3259.474s) [main            ]                MPI.cpp:155   INFO| Computing communicaton graph edges (using NBX algorithm). Number of input edges: 0
2024-03-30 14:23:51.313 (3259.474s) [main            ]                MPI.cpp:218   INFO| Finished graph edge discovery using NBX algorithm. Number of discovered edges 0
2024-03-30 14:23:51.313 (3259.474s) [main            ]          partition.cpp:232   INFO| Compute ghost indices
2024-03-30 14:23:51.313 (3259.474s) [main            ]                MPI.cpp:155   INFO| Computing communicaton graph edges (using NBX algorithm). Number of input edges: 0
2024-03-30 14:23:51.314 (3259.474s) [main            ]                MPI.cpp:218   INFO| Finished graph edge discovery using NBX algorithm. Number of discovered edges 0
2024-03-30 14:23:51.314 (3259.474s) [main            ]                MPI.cpp:155   INFO| Computing communicaton graph edges (using NBX algorithm). Number of input edges: 0
2024-03-30 14:23:51.314 (3259.474s) [main            ]                MPI.cpp:218   INFO| Finished graph edge discovery using NBX algorithm. Number of discovered edges 0
2024-03-30 14:23:51.314 (3259.474s) [main            ]                MPI.cpp:155   INFO| Computing communicaton graph edges (using NBX algorithm). Number of input edges: 0
2024-03-30 14:23:51.314 (3259.474s) [main            ]                MPI.cpp:218   INFO| Finished graph edge discovery using NBX algorithm. Number of discovered edges 0
2024-03-30 14:23:51.314 (3259.474s) [main            ]                MPI.cpp:155   INFO| Computing communicaton graph edges (using NBX algorithm). Number of input edges: 0
2024-03-30 14:23:51.314 (3259.475s) [main            ]                MPI.cpp:218   INFO| Finished graph edge discovery using NBX algorithm. Number of discovered edges 0
2024-03-30 14:23:51.314 (3259.475s) [main            ]                MPI.cpp:155   INFO| Computing communicaton graph edges (using NBX algorithm). Number of input edges: 0
2024-03-30 14:23:51.314 (3259.475s) [main            ]                MPI.cpp:218   INFO| Finished graph edge discovery using NBX algorithm. Number of discovered edges 0
2024-03-30 14:23:51.314 (3259.475s) [main            ]                MPI.cpp:155   INFO| Computing communicaton graph edges (using NBX algorithm). Number of input edges: 0
2024-03-30 14:23:51.314 (3259.475s) [main            ]                MPI.cpp:218   INFO| Finished graph edge discovery using NBX algorithm. Number of discovered edges 0
2024-03-30 14:23:51.314 (3259.475s) [main            ]                MPI.cpp:155   INFO| Computing communicaton graph edges (using NBX algorithm). Number of input edges: 0
2024-03-30 14:23:51.314 (3259.475s) [main            ]                MPI.cpp:218   INFO| Finished graph edge discovery using NBX algorithm. Number of discovered edges 0
2024-03-30 14:23:51.314 (3259.475s) [main            ]                  MPI.h:367   INFO| Number of neighbourhood source ranks in distribute_to_postoffice: 0
2024-03-30 14:23:51.314 (3259.475s) [main            ]                MPI.cpp:155   INFO| Computing communicaton graph edges (using NBX algorithm). Number of input edges: 0
2024-03-30 14:23:51.314 (3259.475s) [main            ]                MPI.cpp:218   INFO| Finished graph edge discovery using NBX algorithm. Number of discovered edges 0
2024-03-30 14:23:51.314 (3259.475s) [main            ]                  MPI.h:499   INFO| Neighbourhood destination ranks from post office in distribute_data (rank, num dests, num dests/mpi_size): 0, 0, 0
2024-03-30 14:23:51.315 (3259.475s) [main            ]         xdmf_utils.cpp:449   INFO| XDMF distribute entity data
2024-03-30 14:23:51.315 (3259.475s) [main            ]         xdmf_utils.cpp:499   INFO| XDMF send entity nodes size:(49)
2024-03-30 14:23:51.315 (3259.475s) [main            ]         xdmf_utils.cpp:580   INFO| XDMF send entity keys size: (70)
2024-03-30 14:23:51.315 (3259.475s) [main            ]         xdmf_utils.cpp:651   INFO| XDMF return entity and value data size:(70)
2024-03-30 14:23:51.315 (3259.475s) [main            ]         xdmf_utils.cpp:684   INFO| XDMF build map
2024-03-30 14:23:51.315 (3259.475s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 2 - 0
2024-03-30 14:23:51.315 (3259.476s) [main            ]             MeshTags.h:140   INFO| Building MeshTgas object from tagged entities (defined by vertices).
2024-03-30 14:23:51.315 (3259.476s) [main            ]           Topology.cpp:1204  INFO| Build list if mesh entity indices from the entity vertices.
2024-03-30 14:23:51.315 (3259.476s) [main            ]         xdmf_utils.cpp:449   INFO| XDMF distribute entity data
2024-03-30 14:23:51.315 (3259.476s) [main            ]         xdmf_utils.cpp:499   INFO| XDMF send entity nodes size:(49)
2024-03-30 14:23:51.315 (3259.476s) [main            ]         xdmf_utils.cpp:580   INFO| XDMF send entity keys size: (6)
2024-03-30 14:23:51.315 (3259.476s) [main            ]         xdmf_utils.cpp:651   INFO| XDMF return entity and value data size:(6)
2024-03-30 14:23:51.315 (3259.476s) [main            ]         xdmf_utils.cpp:684   INFO| XDMF build map
2024-03-30 14:23:51.315 (3259.476s) [main            ]topologycomputation.cpp:746   INFO| Computing mesh entities of dimension 1
2024-03-30 14:23:51.316 (3259.476s) [main            ]                MPI.cpp:155   INFO| Computing communicaton graph edges (using NBX algorithm). Number of input edges: 0
2024-03-30 14:23:51.316 (3259.476s) [main            ]                MPI.cpp:218   INFO| Finished graph edge discovery using NBX algorithm. Number of discovered edges 0
2024-03-30 14:23:51.316 (3259.477s) [main            ]                MPI.cpp:155   INFO| Computing communicaton graph edges (using NBX algorithm). Number of input edges: 0
2024-03-30 14:23:51.316 (3259.477s) [main            ]                MPI.cpp:218   INFO| Finished graph edge discovery using NBX algorithm. Number of discovered edges 0
2024-03-30 14:23:51.316 (3259.477s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 1 - 2
2024-03-30 14:23:51.316 (3259.477s) [main            ]topologycomputation.cpp:650   INFO| Computing mesh connectivity 1 - 2 from transpose.
2024-03-30 14:23:51.316 (3259.477s) [main            ]             MeshTags.h:140   INFO| Building MeshTgas object from tagged entities (defined by vertices).
2024-03-30 14:23:51.316 (3259.477s) [main            ]           Topology.cpp:1204  INFO| Build list if mesh entity indices from the entity vertices.
2024-03-30 14:23:51.324 (3259.484s) [main            ]topologycomputation.cpp:746   INFO| Computing mesh entities of dimension 0
2024-03-30 14:23:51.324 (3259.484s) [main            ]                MPI.cpp:155   INFO| Computing communicaton graph edges (using NBX algorithm). Number of input edges: 0
2024-03-30 14:23:51.324 (3259.484s) [main            ]                MPI.cpp:218   INFO| Finished graph edge discovery using NBX algorithm. Number of discovered edges 0
2024-03-30 14:23:51.324 (3259.485s) [main            ]                MPI.cpp:155   INFO| Computing communicaton graph edges (using NBX algorithm). Number of input edges: 0
2024-03-30 14:23:51.324 (3259.485s) [main            ]                MPI.cpp:218   INFO| Finished graph edge discovery using NBX algorithm. Number of discovered edges 0
2024-03-30 14:23:51.324 (3259.485s) [main            ]                MPI.cpp:155   INFO| Computing communicaton graph edges (using NBX algorithm). Number of input edges: 0
2024-03-30 14:23:51.324 (3259.485s) [main            ]                MPI.cpp:218   INFO| Finished graph edge discovery using NBX algorithm. Number of discovered edges 0
2024-03-30 14:23:51.324 (3259.485s) [main            ]                MPI.cpp:155   INFO| Computing communicaton graph edges (using NBX algorithm). Number of input edges: 0
2024-03-30 14:23:51.324 (3259.485s) [main            ]                MPI.cpp:218   INFO| Finished graph edge discovery using NBX algorithm. Number of discovered edges 0
2024-03-30 14:23:51.324 (3259.485s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 1 - 2
2024-03-30 14:23:51.330 (3259.491s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 1 - 2
2024-03-30 14:23:51.330 (3259.491s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 2 - 1
2024-03-30 14:23:51.330 (3259.491s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 1 - 2
2024-03-30 14:23:51.330 (3259.491s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 2 - 1
2024-03-30 14:23:51.335 (3259.495s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 1 - 2
2024-03-30 14:23:51.335 (3259.495s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 2 - 1
2024-03-30 14:23:51.335 (3259.495s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 1 - 2
2024-03-30 14:23:51.335 (3259.495s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 2 - 1
2024-03-30 14:23:51.339 (3259.500s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 1 - 2
2024-03-30 14:23:51.339 (3259.500s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 2 - 1
2024-03-30 14:23:51.339 (3259.500s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 1 - 2
2024-03-30 14:23:51.339 (3259.500s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 2 - 1

 ------- Pre-processing --------
    Current density: 10.0
    Length of the side where current is injected: 0.1
    Length of the side where current leaves the wire: 0.09999999999999998
    This should correspond to the current injected: 0.9999999999999998
    

2024-03-30 14:23:51.671 (3259.832s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 1 - 2
2024-03-30 14:23:51.671 (3259.832s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 2 - 1
2024-03-30 14:23:51.671 (3259.832s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 1 - 2
2024-03-30 14:23:51.671 (3259.832s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 2 - 1
2024-03-30 14:23:52.046 (3260.207s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 1 - 2
2024-03-30 14:23:52.046 (3260.207s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 2 - 1
2024-03-30 14:23:52.046 (3260.207s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 1 - 2
2024-03-30 14:23:52.046 (3260.207s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 2 - 1

------- Processing --------
    Number of iterations: 1
-3.8548389568554344

2024-03-30 14:23:52.361 (3260.522s) [main            ]    SparsityPattern.cpp:389   INFO| Column ghost size increased from 0 to 0

2024-03-30 14:23:52.362 (3260.523s) [main            ]       NewtonSolver.cpp:36    INFO| Newton iteration 0: r (abs) = 0.745356 (tol = 1e-10) r (rel) = inf(tol = 1e-09)
2024-03-30 14:23:52.362 (3260.523s) [main            ]              petsc.cpp:677   INFO| PETSc Krylov solver starting to solve system.
2024-03-30 14:23:52.363 (3260.524s) [main            ]       NewtonSolver.cpp:36    INFO| Newton iteration 1: r (abs) = 4.5343e-15 (tol = 1e-10) r (rel) = 3.72362e-16(tol = 1e-09)
2024-03-30 14:23:52.363 (3260.524s) [main            ]       NewtonSolver.cpp:255   INFO| Newton solver finished in 1 iterations and 1 linear solver iterations.
2024-03-30 14:23:52.367 (3260.528s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 1 - 2
2024-03-30 14:23:52.367 (3260.528s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 2 - 1
2024-03-30 14:23:52.367 (3260.528s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 1 - 2
2024-03-30 14:23:52.367 (3260.528s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 2 - 1
2024-03-30 14:23:52.371 (3260.532s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 1 - 2
2024-03-30 14:23:52.371 (3260.532s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 2 - 1
2024-03-30 14:23:52.371 (3260.532s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 1 - 2
2024-03-30 14:23:52.371 (3260.532s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 2 - 1
2024-03-30 14:23:52.375 (3260.535s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 1 - 2
2024-03-30 14:23:52.375 (3260.535s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 2 - 1
2024-03-30 14:23:52.375 (3260.535s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 1 - 2
2024-03-30 14:23:52.375 (3260.535s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 2 - 1
2024-03-30 14:23:52.379 (3260.539s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 1 - 2
2024-03-30 14:23:52.379 (3260.539s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 2 - 1
2024-03-30 14:23:52.379 (3260.539s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 1 - 2
2024-03-30 14:23:52.379 (3260.539s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 2 - 1
2024-03-30 14:23:52.382 (3260.543s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 1 - 2
2024-03-30 14:23:52.382 (3260.543s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 2 - 1
2024-03-30 14:23:52.382 (3260.543s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 1 - 2
2024-03-30 14:23:52.382 (3260.543s) [main            ]topologycomputation.cpp:786   INFO| Requesting connectivity 2 - 1

With the bcs I get a similar output, but with a resistance of -3.847426332583171.
So this MWE displays the same behavior: -3.8548389568554344 vs -3.847426332583171. For my purposes, I absolutely need to know which one is correct; the error would be too big otherwise.

Thanks again if you have any insight (and please consider my file.msh + original code as a sort of MWE even though it is in 2 files and could be trimmed).

This is because you didn’t make an MWE, so I had to make something that could illustrate the point without having to spend time reverse engineering what you did originally.

The solution is only unique up to a constant. Getting slightly different results doesn’t surprise me.

Why not just use a unit square created with dolfinx, where you mark the boundaries in the code? It massively reduces the complexity of reproducing your results.

These are equal up to machine precision.

These are there to make sure MUMPS supports null spaces.
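(For reference, in the MUMPS manual ICNTL(24)=1 enables null-pivot detection, and ICNTL(25)=0 then asks for one possible solution of the possibly singular system.)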

If you are trying to compute any global quantity, like a volume or an area, you need to accumulate the values across processes when running in parallel.

I also changed the ksp type to preonly. If you use iterative solvers with non-exact ksp methods, it can result in more iterations.

Gotcha.

I agree that the solution is defined up to a constant. However, the -3.88 and -3.87 (as well as the -1.0000…) aren’t just the solution: they are the solution integrated over 2 different curves and subtracted from one another. This subtraction yields something absolute, not arbitrary. There is definitely something odd with my quarter-of-an-annulus mesh that isn’t occurring on the unit square mesh.

That’s a good idea; unfortunately I am not good enough with FEniCSx, and I don’t have working code at hand where parts of a mesh generated by FEniCSx are marked. Furthermore, I do not get my problem with a unit square mesh, so there might be little point in this particular case. But for presenting an MWE in the future, this would indeed be useful to me.
Here is my unfinished attempt (with some commentary in my code about what I think I should do):

from dolfinx import log
from dolfinx.fem import (Constant, Function, FunctionSpace, assemble_scalar,
                         dirichletbc, form, locate_dofs_geometrical, VectorFunctionSpace,
                         locate_dofs_topological, Expression)
from dolfinx.fem.petsc import NonlinearProblem
from dolfinx.io import XDMFFile, gmshio
from dolfinx.nls.petsc import NewtonSolver
from mpi4py import MPI
from petsc4py import PETSc
from petsc4py.PETSc import ScalarType
from ufl import (FiniteElement, Measure, MixedElement, SpatialCoordinate,
                 TestFunction, TrialFunction, as_tensor, dot, dx, grad, inner,
                 inv, split, derivative, FacetNormal, div)
import gmsh
import numpy as np
from dolfinx.io import VTXWriter
from dolfinx.mesh import create_unit_square


def on_left_boundary(x):
    return np.isclose(x[0], 0)

def on_right_boundary(x):
    return np.isclose(x[0], 1)


def resistance(meshfile):
    # mesh, cell_markers, facet_markers = gmshio.read_from_msh(meshfile, MPI.COMM_WORLD, gdim=2)

    mesh = create_unit_square(MPI.COMM_WORLD, 20, 20)

    # These two lines would have to move below, after V is defined:
    # boundary_left_dofs = locate_dofs_geometrical(V, on_left_boundary)
    # boundary_right_dofs = locate_dofs_geometrical(V, on_right_boundary)

    # Here I want to mark the left boundary with a "12"
    # and the right boundary with a "13", so the rest of the code should work
    # (see the sketch after this code block).
    
    the_current = 1.0

    current_curves=(12,13)
    voltages_curves=(12,13)
    inj_current_curve, out_current_curve = current_curves
    reading_voltage_curve_0, reading_voltage_curve_1 = voltages_curves

    # Define FE function space
    deg = 1
    el = FiniteElement("CG", mesh.ufl_cell(), deg)
    V = FunctionSpace(mesh, el)    

    v = TestFunction(V)
    volt = Function(V)
  
    rho = 1
    sigma = 1.0 / rho

    # Define the boundary conditions
    left_facets = facet_markers.find(inj_current_curve)
    right_facets = facet_markers.find(out_current_curve)
    left_dofs = locate_dofs_topological(V, mesh.topology.dim-1, left_facets)
    V_current_contact_out = 0
    bcs = [dirichletbc(ScalarType(V_current_contact_out), left_dofs, V)]
    
    
    x = SpatialCoordinate(mesh)
    dx = Measure("dx", domain=mesh)  # no cell markers for the built-in unit square
    ds = Measure("ds", domain=mesh, subdomain_data=facet_markers)

    J = the_current / assemble_scalar(form(1 * ds(inj_current_curve, domain=mesh)))
    #J = the_current / mesh.comm.allreduce(assemble_scalar(form(1 * ds)), op=MPI.SUM)

    # Weak form.
    J_vector = -sigma * grad(volt)

    F_V = -dot(grad(v), sigma * grad(volt))*dx - v * J * ds(out_current_curve) + v * J * ds(inj_current_curve) 

    weak_form = F_V

    # Solve the PDE.
    from dolfinx.fem.petsc import NonlinearProblem
    from dolfinx.nls.petsc import NewtonSolver


    print(f''' ------- Pre-processing --------
    Current density: {J}
    Length of the side where current is injected: {assemble_scalar(form(1 * ds(inj_current_curve, domain=mesh)))}
    Length of the side where current leaves the wire: {assemble_scalar(form(1 * ds(out_current_curve, domain=mesh)))}
    This should correspond to the current injected: {assemble_scalar(form(J*ds(out_current_curve)))}
    ''')


    # Solve the PDE.
    problem = NonlinearProblem(weak_form, volt, bcs=bcs)
    solver = NewtonSolver(MPI.COMM_WORLD, problem)

    ksp = solver.krylov_solver
    opts = PETSc.Options()
    option_prefix = ksp.getOptionsPrefix()
    opts[f"{option_prefix}ksp_type"] = "preonly"
    opts[f"{option_prefix}pc_type"] = "lu"
    opts[f"{option_prefix}pc_factor_mat_solver_type"] = "mumps"
    # Option to support solving a singular matrix
    #opts[f"{option_prefix}mat_mumps_icntl_24"] = 1
    #opts[f"{option_prefix}mat_mumps_icntl_25"] = 0
    ksp.setFromOptions()
    nullspace = PETSc.NullSpace().create(constant=True)  # type: ignore
    PETSc.Mat.setNearNullSpace(solver.A, nullspace)
    log.set_log_level(log.LogLevel.INFO)
    n, converged = solver.solve(volt)
    assert (converged)
    print(f'''------- Processing --------
    Number of iterations: {n:d}''')



    avg_voltage_curve_integral_0 = assemble_scalar(form(volt * ds(reading_voltage_curve_0, domain=mesh))) / assemble_scalar(form(1 * ds(reading_voltage_curve_0, domain=mesh)))
    avg_voltage_curve_integral_1 = assemble_scalar(form(volt * ds(reading_voltage_curve_1, domain=mesh))) / assemble_scalar(form(1 * ds(reading_voltage_curve_1, domain=mesh)))
    voltage_diff = avg_voltage_curve_integral_1 - avg_voltage_curve_integral_0
    resistance = voltage_diff / assemble_scalar(form(J*ds(out_current_curve)))

    return resistance


res = resistance("meshes/quarter_annulus.msh")
print(res)
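For the record, here is one way the marking step could be completed, as an untested sketch using standard dolfinx.mesh utilities (this would go inside resistance(), right after create_unit_square, so that facet_markers exists and the rest of the code can keep using the 12/13 tags):

    import numpy as np
    from dolfinx.mesh import locate_entities_boundary, meshtags

    fdim = mesh.topology.dim - 1
    left_facets = locate_entities_boundary(mesh, fdim, on_left_boundary)
    right_facets = locate_entities_boundary(mesh, fdim, on_right_boundary)
    facet_indices = np.hstack([left_facets, right_facets])
    facet_values = np.hstack([np.full_like(left_facets, 12),
                              np.full_like(right_facets, 13)])
    # meshtags expects the entity indices in sorted order
    order = np.argsort(facet_indices)
    facet_markers = meshtags(mesh, fdim, facet_indices[order], facet_values[order])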

I agree, this is very satisfying indeed, and it should be so. The good thing is that with this simple mesh I can compare against the analytical result, and the numbers match what they should be. The bad thing is that the test is not conclusive in telling me which one of -3.87 and -3.88 I can trust, if either, for my quarter-of-an-annulus mesh. So switching to this simpler mesh removed my problem without giving me the information I was looking for.

Thanks a lot for the info!

Again, thanks for the valuable information.

Alright, good to know. For this specific problem, I won’t need more than what I already do. If only I could elucidate why I get -3.87 and -3.88, and which one I can trust (if either), I would be done with this part of my problem.

In the grand scheme of things, I plan to use this code to compute the electrical resistance of various shapes in an automated way. But so far I get two different values for a quarter of an annulus, depending on whether I specify Dirichlet b.c.s or not. I’d rather not have to specify these b.c.s, because I have a more complex (mixed-element) problem that is similar to this one, but in which I cannot set any surface or curve to a fixed potential (I could, however, set a node to any value). So I really like the idea of using a null space and letting the solver deal with the problem itself.

I have attempted to find an analytical solution for the quarter of an annulus. In the process, I think I may have found the culprit for why I get two different answers depending on whether or not I specify the Dirichlet boundary conditions, but I am not 100% sure.

So the eq. to solve is \nabla \cdot \vec J = 0, where \vec J is related to the voltage through \vec J = - \sigma \nabla V. I have Neumann boundary conditions: the current density integrated over the contact curves at \theta = 0 and \theta = \pi/2 (the angular span of the quarter annulus) must equal the current I, which is assumed to be known.

This makes the PDE to solve \nabla^2 V = 0. Now here’s the thing: if I assume that V only depends on \theta, then I get a computed voltage between the 2 curves of \frac{I\rho}{a-b}\frac{\pi}{4}, which gives about 7.8539 for the values of my problem, way too off; I probably made a mistake in my calculations, but that’s the idea.
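For what it’s worth, redoing that calculation more carefully (a sketch, assuming V depends only on \theta, with the arcs in the geo file having radii b = 0.2 and a = 0.3 about the centre (0.3, 0)):

\nabla^2 V = \frac{1}{r^2}\frac{d^2 V}{d\theta^2} = 0 \Rightarrow V = A\theta, \qquad J_\theta = -\sigma \frac{A}{r},

I = \int_b^a |J_\theta| \, dr = \sigma |A| \ln\frac{a}{b} \Rightarrow |\Delta V| = \frac{\pi}{2}|A| = \frac{I \rho \, (\pi/2)}{\ln(a/b)} \approx 3.87,

which is at least in the same ballpark as both FEM values.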
But I realize something. If V depends on r (and I believe it does), then even with 2 Dirichlet boundary conditions imposing a constant voltage on each curve, it is a slightly different problem. Therefore, I start to believe that when I use FEniCSx and specify a Dirichlet boundary condition on one side, I am solving a slightly different problem than when I let FEniCSx handle it through the null space. I am becoming convinced of this.

Therefore, in my case, I should trust the result where I do not enforce the Dirichlet boundary condition, because in the end I will not apply a uniform voltage on either end. I solve the problem where the total current passing through each curve must equal I, which, for some geometries, might yield an almost constant or even constant voltage on the contacts. But in the annulus case there is a slight deviation, I believe. This explains the -3.88 vs -3.87.

Now I’m a bit sad I didn’t get the analytical solution right on my first attempt, but it doesn’t matter much at this point. I’ll proceed. A big thank you, dokken, and anyone else reading this.

Edit: In fact, not only does this argument explain why there’s a difference in the case of the annulus, it also explains why there’s no difference in the case of the unit square: there, the 2 problems are exactly the same due to the symmetry. Wow. I am very happy; I think I have understood this mystery.

Hello again @dokken,
I have played with several different meshes, and I am now concerned. It looks like the null space has no effect whatsoever, and on some meshes this yields completely off results, despite a quick convergence in 1 step. In those cases, there is a big difference between the result obtained with a Dirichlet boundary condition and that obtained with 2 Neumann conditions + the null space.

I am still unable to provide a single-file MWE using FEniCSx only (plus, since the issue is mesh-related, it may take a very long time and a lot of effort just to produce the MWE).

The problematic mesh can be generated with

SetFactory("OpenCASCADE");
Mesh.CharacteristicLengthMax = 0.010;
Mesh.CharacteristicLengthMin = 0.001;
Rectangle(1) = {0, 0, 0, 0.03, 0.3, 0};
Rectangle(2) = {0, 0.2, 0, 0.9, 0.1, 0};
BooleanFragments{ Surface{2}; Surface{1}; Delete; }{ }
BooleanFragments{ Surface{1}; Surface{3}; Delete; }{ }
Physical Surface("material", 11) = {2, 1, 3};
Physical Curve("bottom_left", 12) = {10};
Physical Curve("top_right", 13) = {6};

which is just 2 rectangles of different sizes meeting each other; the symmetry is “broken” compared to equally sized rectangles.

My FEniCSx code:

import numpy as np
from dolfinx import log
from dolfinx.fem import (Constant, Function, functionspace, FunctionSpace, assemble_scalar,
                         dirichletbc, form, locate_dofs_geometrical, VectorFunctionSpace,
                         locate_dofs_topological, Expression, create_matrix)
from dolfinx.fem.petsc import NonlinearProblem
from dolfinx.nls.petsc import NewtonSolver
from mpi4py import MPI
from petsc4py import PETSc
from petsc4py.PETSc import ScalarType
from ufl import (FiniteElement, Measure, MixedElement, SpatialCoordinate,
                 TestFunction, TrialFunction, as_tensor, dot, dx, grad, inner,
                 inv, split, derivative, FacetNormal, div)
import gmsh
from dolfinx.io import gmshio, VTXWriter

def resistance(meshfile):
    mesh, cell_markers, facet_markers = gmshio.read_from_msh(meshfile, MPI.COMM_WORLD, gdim=2)

    the_current = 1.0
    current_curves=(12,13)
    voltages_curves=(12,13)    
    
    inj_current_curve, out_current_curve = current_curves
    reading_voltage_curve_0, reading_voltage_curve_1 = voltages_curves

    # Define FE function space
    deg = 3
    el = FiniteElement("CG", mesh.ufl_cell(), deg)
    V = FunctionSpace(mesh, el)    

    v = TestFunction(V)
    volt = Function(V)
  
    rho = 1
    sigma = 1.0 / rho

    # Define the boundary conditions
    left_facets = facet_markers.find(inj_current_curve)
    right_facets = facet_markers.find(out_current_curve)
    left_dofs = locate_dofs_topological(V, mesh.topology.dim-1, left_facets)
    V_current_contact_out = 0
    bcs = [dirichletbc(ScalarType(V_current_contact_out), left_dofs, V)]
    
    x = SpatialCoordinate(mesh)
    dx = Measure("dx", domain=mesh, subdomain_data=cell_markers)
    ds = Measure("ds", domain=mesh, subdomain_data=facet_markers)
    J = the_current / assemble_scalar(form(1 * ds(inj_current_curve, domain=mesh)))

    # Weak form.
    J_vector = -sigma * grad(volt)
    F_V = dot(grad(v), J_vector)*dx + v * J * ds(out_current_curve) - v * J * ds(inj_current_curve) 
    weak_form = F_V

    # Solve the PDE.
    problem = NonlinearProblem(weak_form, volt)
    solver = NewtonSolver(MPI.COMM_WORLD, problem)
    solver.convergence_criterion = "residual"  # was "incremental"
    solver.rtol = 1e-9
    #solver.atol = 1e-9
    ksp = solver.krylov_solver
    opts = PETSc.Options()
    option_prefix = ksp.getOptionsPrefix()
    opts[f"{option_prefix}ksp_type"] = "preonly"
    opts[f"{option_prefix}pc_type"] = "lu"
    opts[f"{option_prefix}pc_factor_mat_solver_type"] = "mumps"
    # Option to support solving a singular matrix
    opts[f"{option_prefix}mat_mumps_icntl_24"] = 1
    opts[f"{option_prefix}mat_mumps_icntl_25"] = 0
    ksp.setFromOptions()
    # Specify the null space, because voltage is defined up to a constant and we don't use any Dirichlet boundary condition to fix the voltage.
    nullspace = PETSc.NullSpace().create(constant=True)  # type: ignore
    PETSc.Mat.setNearNullSpace(solver.A, nullspace)
    log.set_log_level(log.LogLevel.WARNING)
    n, converged = solver.solve(volt)
    assert (converged)
    print(f'''------- Processing --------
    Number of iterations: {n:d}''')



    avg_voltage_curve_integral_0 = assemble_scalar(form(volt * ds(reading_voltage_curve_0, domain=mesh))) / assemble_scalar(form(1 * ds(reading_voltage_curve_0, domain=mesh)))
    avg_voltage_curve_integral_1 = assemble_scalar(form(volt * ds(reading_voltage_curve_1, domain=mesh))) / assemble_scalar(form(1 * ds(reading_voltage_curve_1, domain=mesh)))
    voltage_diff = avg_voltage_curve_integral_1 - avg_voltage_curve_integral_0
    resistance = voltage_diff / assemble_scalar(form(J*ds(out_current_curve)))


    n = FacetNormal(mesh)
    down_current_flux = form(dot(J_vector, n)*ds(inj_current_curve))
    I_out = assemble_scalar(down_current_flux)
    up_current_flux = form(dot(J_vector, n)*ds(out_current_curve))
    I_in = assemble_scalar(up_current_flux)
    # Positive means outwards the curve/surface.
    print(f'I_out: {I_out}')
    print(f'I_in: {I_in}')
    print(f'div(Je) should be 0: {assemble_scalar(form(div(J_vector) * dx(domain=mesh)))}')
    return resistance

res = resistance("meshes/rectangles.msh")
print(f'resistance: {res}')

This returns:

 ------- Pre-processing --------
    Current density: 33.333333333333336
    Length of the side where current is injected: 0.03
    Length of the side where current leaves the wire: 0.09999999999999998
    This should correspond to the current injected: 3.3333333333333335
    
------- Processing --------
    Number of iterations: 1
I_out: 0.49841721000964123
I_in: -4.631054379212907
div(Je) should be 0: -317.7210306036274

As you can see, the in and out currents are wrong: they should both match the injected current of 3.33 in magnitude. And the weak form corresponds to the equation \nabla \cdot \vec J = 0, yet here the value of the divergence integral is -317, which is totally off.

All this is “fixed” if I use a Dirichlet b.c. by adding the parameter bcs=bcs to the problem line.
Indeed, this yields:

 ------- Pre-processing --------
    Current density: 33.333333333333336
    Length of the side where current is injected: 0.03
    Length of the side where current leaves the wire: 0.09999999999999998
    This should correspond to the current injected: 3.3333333333333335
    
------- Processing --------
    Number of iterations: 1
I_out: 3.333333333328877
I_in: -3.3333333333376225
div(Je) should be 0: -6.975175800090215e-10

which is alright: both currents are correct, and we do find that the divergence of \vec J vanishes, as it should, since that is the weak-form equation.
I really need to fix this issue, since I don’t use Dirichlet b.c.s in my more complex problem. If you have an idea of what is wrong with my null-space approach, feel free to share.

According to ChatGPT, this line should be changed to PETSc.Mat.setNullSpace(solver.A, nullspace). I will be able to test within 12 hours from now. Do you think this is a good suggestion? I have read the PETSc docs, but I don’t really know.

Yes, it should be a good suggestion. If I remember correctly, setNullSpace should be used when solving with direct solvers (like MUMPS, which you are using), while setNearNullSpace is for iterative ones.
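In petsc4py terms, both attach to the assembled matrix; a minimal sketch:

nullspace = PETSc.NullSpace().create(constant=True)
solver.A.setNullSpace(nullspace)      # used when solving the (singular) system
solver.A.setNearNullSpace(nullspace)  # a hint for e.g. multigrid preconditioners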


Unfortunately, the suggestion makes the Newton method fail to converge; it doesn’t diverge either, but stays completely stagnant even after 1000 iterations.

The code:

    # Solve the PDE.
    problem = NonlinearProblem(weak_form, volt)
    solver = NewtonSolver(MPI.COMM_WORLD, problem)


    solver.convergence_criterion = "residual"  # was "incremental"
    solver.rtol = 1e-9
    solver.atol = 1e-9
    solver.report = True
    solver.max_it = 100
    
    
    ksp = solver.krylov_solver
    opts = PETSc.Options()
    option_prefix = ksp.getOptionsPrefix()
    opts[f"{option_prefix}ksp_type"] = "preonly"
    opts[f"{option_prefix}pc_type"] = "lu"
    opts[f"{option_prefix}pc_factor_mat_solver_type"] = "mumps"
    # Option to support solving a singular matrix
    opts[f"{option_prefix}mat_mumps_icntl_24"] = 1
    opts[f"{option_prefix}mat_mumps_icntl_25"] = 0
    ksp.setFromOptions()
    # Specify the null space, because voltage is defined up to a constant and we don't use any Dirichlet boundary condition to fix the voltage.
    nullspace = PETSc.NullSpace().create(constant=True)  # type: ignore
    #PETSc.Mat.setNearNullSpace(solver.A, nullspace)
    PETSc.Mat.setNullSpace(solver.A, nullspace)
    log.set_log_level(log.LogLevel.INFO)
    n, converged = solver.solve(volt)
    assert (converged)
    print(f'''------- Processing --------
    Number of iterations: {n:d}''')

yields:

 ------- Pre-processing --------
    Current density: 33.333333333333336
    Length of the side where current is injected: 0.03
    Length of the side where current leaves the wire: 0.09999999999999998
    This should correspond to the current injected: 3.3333333333333335
    
2024-04-16 20:23:29.172 (   0.286s) [main            ]       NewtonSolver.cpp:36    INFO| Newton iteration 0: r (abs) = 0.73388 (tol = 1e-09) r (rel) = inf(tol = 1e-09)
2024-04-16 20:23:29.178 (   0.292s) [main            ]              petsc.cpp:698   INFO| PETSc Krylov solver starting to solve system.
2024-04-16 20:23:29.235 (   0.349s) [main            ]         TimeLogger.cpp:28    INFO| Elapsed wall, usr, sys time: 0.057305, 0.050000, 0.000000 (PETSc Krylov solver)
2024-04-16 20:23:29.236 (   0.350s) [main            ]       NewtonSolver.cpp:36    INFO| Newton iteration 1: r (abs) = 17.2902 (tol = 1e-09) r (rel) = 0.0327984(tol = 1e-09)
2024-04-16 20:23:29.241 (   0.355s) [main            ]              petsc.cpp:698   INFO| PETSc Krylov solver starting to solve system.
2024-04-16 20:23:29.259 (   0.373s) [main            ]         TimeLogger.cpp:28    INFO| Elapsed wall, usr, sys time: 0.018219, 0.020000, 0.000000 (PETSc Krylov solver)
2024-04-16 20:23:29.260 (   0.374s) [main            ]       NewtonSolver.cpp:36    INFO| Newton iteration 2: r (abs) = 17.2909 (tol = 1e-09) r (rel) = 0.0327996(tol = 1e-09)
(...)
2024-04-16 20:23:31.508 (   2.622s) [main            ]       NewtonSolver.cpp:36    INFO| Newton iteration 99: r (abs) = 17.2758 (tol = 1e-09) r (rel) = 0.0327711(tol = 1e-09)
2024-04-16 20:23:31.513 (   2.627s) [main            ]              petsc.cpp:698   INFO| PETSc Krylov solver starting to solve system.
2024-04-16 20:23:31.531 (   2.645s) [main            ]         TimeLogger.cpp:28    INFO| Elapsed wall, usr, sys time: 0.018367, 0.020000, 0.000000 (PETSc Krylov solver)
2024-04-16 20:23:31.532 (   2.646s) [main            ]       NewtonSolver.cpp:36    INFO| Newton iteration 100: r (abs) = 17.3185 (tol = 1e-09) r (rel) = 0.0328521(tol = 1e-09)
Traceback (most recent call last):
  File "/root/shared/TE_simulation/Bridgman_Nye_example.py", line 468, in <module>
    res = resistance("meshes/rectangles.msh")
  File "/root/shared/TE_simulation/Bridgman_Nye_example.py", line 113, in resistance
    n, converged = solver.solve(volt)
  File "/usr/local/dolfinx-real/lib/python3.10/dist-packages/dolfinx/nls/petsc.py", line 46, in solve
    n, converged = super().solve(u.vector)
RuntimeError: Newton solver did not converge because maximum number of iterations reached
2024-04-16 20:23:31.575 (   2.690s) [main            ]             loguru.cpp:526   INFO| atexit

(I removed some lines to be able to post here.)
Is there something else I can do to get the two Neumann b.c. working?

ChatGPT suggests using a custom Newton solver class in order to override the "update_solution" method. The goal is to remove the null space at each iteration.

But this sounds strange to me, because the problem is linear, so convergence should be reached within a single Newton step; I highly doubt ChatGPT is leading me down the right track.

Any help is immensely appreciated.

Edit: I am also puzzled as to why my code converges to wrong values if I don’t specify the null space. I would expect it to not converge at all instead. This really troubles me.

Please note that if you use a direct solver and set the nullspace, you should remove it from the right-hand side; see
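In outline, for a linear system Ax = b solved directly (a sketch; A, b and ksp are the assembled PETSc objects):

A.setNullSpace(nullspace)
nullspace.remove(b)  # make the right-hand side orthogonal to the nullspace
ksp.solve(b, x)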


Thanks a lot dokken for this information. I’ll try to adapt it to the NonlinearProblem, as the weak form has a slightly different notation with a single "F", as opposed to "a" and "L" (or Ax = b).

Unfortunately, this didn’t improve things. The relevant part of my code:

    # Solve the PDE.
    problem = NonlinearProblem(weak_form, volt)
    solver = NewtonSolver(MPI.COMM_WORLD, problem)

    solver.convergence_criterion = "residual"  # alternative: "incremental"
    solver.rtol = 1e-9
    solver.atol = 1e-9
    solver.report = True
    solver.max_it = 100
    
    ksp = solver.krylov_solver
    opts = PETSc.Options()
    option_prefix = ksp.getOptionsPrefix()
    opts[f"{option_prefix}ksp_type"] = "preonly"
    opts[f"{option_prefix}pc_type"] = "lu"
    opts[f"{option_prefix}pc_factor_mat_solver_type"] = "mumps"
    # Option to support solving a singular matrix
    opts[f"{option_prefix}mat_mumps_icntl_24"] = 1
    opts[f"{option_prefix}mat_mumps_icntl_25"] = 0
    ksp.setFromOptions()
    # Specify the null space, because voltage is defined up to a constant and we don't use any Dirichlet boundary condition to fix the voltage.
    nullspace = PETSc.NullSpace().create(constant=True)  # type: ignore
    PETSc.Mat.setNullSpace(solver.A, nullspace)
    nullspace.remove(solver.b)
    log.set_log_level(log.LogLevel.INFO)
    n, converged = solver.solve(volt)
    assert (converged)
    print(f'''------- Processing --------
    Number of iterations: {n:d}''')

Yields (I truncate the middle):

 ------- Pre-processing --------
    Current density: 33.333333333333336
    Length of the side where current is injected: 0.03
    Length of the side where current leaves the wire: 0.09999999999999998
    This should correspond to the current injected: 3.3333333333333335
    
2024-04-17 19:29:02.691 (   0.302s) [main            ]       NewtonSolver.cpp:36    INFO| Newton iteration 0: r (abs) = 0.73388 (tol = 1e-09) r (rel) = inf(tol = 1e-09)
2024-04-17 19:29:02.696 (   0.308s) [main            ]              petsc.cpp:698   INFO| PETSc Krylov solver starting to solve system.
2024-04-17 19:29:02.752 (   0.363s) [main            ]         TimeLogger.cpp:28    INFO| Elapsed wall, usr, sys time: 0.055797, 0.050000, 0.000000 (PETSc Krylov solver)
2024-04-17 19:29:02.753 (   0.364s) [main            ]       NewtonSolver.cpp:36    INFO| Newton iteration 1: r (abs) = 17.2902 (tol = 1e-09) r (rel) = 0.0327984(tol = 1e-09)
2024-04-17 19:29:02.758 (   0.369s) [main            ]              petsc.cpp:698   INFO| PETSc Krylov solver starting to solve system.
2024-04-17 19:29:02.776 (   0.387s) [main            ]         TimeLogger.cpp:28    INFO| Elapsed wall, usr, sys time: 0.018072, 0.020000, 0.000000 (PETSc Krylov solver)
2024-04-17 19:29:02.777 (   0.388s) [main            ]       NewtonSolver.cpp:36    INFO| Newton iteration 2: r (abs) = 17.2909 (tol = 1e-09) r (rel) = 0.0327996(tol = 1e-09)
2024-04-17 19:29:02.782 (   0.393s) [main            ]              petsc.cpp:698   INFO| PETSc Krylov solver starting to solve system.
(...)
2024-04-17 19:29:05.015 (   2.626s) [main            ]       NewtonSolver.cpp:36    INFO| Newton iteration 98: r (abs) = 17.3057 (tol = 1e-09) r (rel) = 0.0328278(tol = 1e-09)
2024-04-17 19:29:05.019 (   2.631s) [main            ]              petsc.cpp:698   INFO| PETSc Krylov solver starting to solve system.
2024-04-17 19:29:05.037 (   2.648s) [main            ]         TimeLogger.cpp:28    INFO| Elapsed wall, usr, sys time: 0.017794, 0.010000, 0.000000 (PETSc Krylov solver)
2024-04-17 19:29:05.038 (   2.649s) [main            ]       NewtonSolver.cpp:36    INFO| Newton iteration 99: r (abs) = 17.2758 (tol = 1e-09) r (rel) = 0.0327711(tol = 1e-09)
2024-04-17 19:29:05.043 (   2.654s) [main            ]              petsc.cpp:698   INFO| PETSc Krylov solver starting to solve system.
2024-04-17 19:29:05.060 (   2.671s) [main            ]         TimeLogger.cpp:28    INFO| Elapsed wall, usr, sys time: 0.017493, 0.020000, 0.000000 (PETSc Krylov solver)
2024-04-17 19:29:05.061 (   2.672s) [main            ]       NewtonSolver.cpp:36    INFO| Newton iteration 100: r (abs) = 17.3185 (tol = 1e-09) r (rel) = 0.0328521(tol = 1e-09)
Traceback (most recent call last):
  File "/root/shared/TE_simulation/Bridgman_Nye_example.py", line 470, in <module>
    res = resistance("meshes/rectangles.msh")
  File "/root/shared/TE_simulation/Bridgman_Nye_example.py", line 115, in resistance
    n, converged = solver.solve(volt)
  File "/usr/local/dolfinx-real/lib/python3.10/dist-packages/dolfinx/nls/petsc.py", line 46, in solve
    n, converged = super().solve(u.vector)
RuntimeError: Newton solver did not converge because maximum number of iterations reached
2024-04-17 19:29:05.097 (   2.708s) [main            ]             loguru.cpp:526   INFO| atexit

I must still be doing something wrong. I haven’t found an example where NonlinearProblem is used in conjunction with two Neumann b.c.

Edit: Based on your link, I have added assert nullspace.test(solver.A).
Surprisingly, my code now returns:

 ------- Pre-processing --------
    Current density: 33.333333333333336
    Length of the side where current is injected: 0.03
    Length of the side where current leaves the wire: 0.09999999999999998
    This should correspond to the current injected: 3.3333333333333335
    
Traceback (most recent call last):
  File "/root/shared/TE_simulation/Bridgman_Nye_example.py", line 471, in <module>
    res = resistance("meshes/rectangles.msh")
  File "/root/shared/TE_simulation/Bridgman_Nye_example.py", line 113, in resistance
    assert nullspace.test(solver.A)
  File "petsc4py/PETSc/Mat.pyx", line 5751, in petsc4py.PETSc.NullSpace.test
  File "petsc4py/PETSc/PETSc.pyx", line 79, in petsc4py.PETSc.CHKERR
petsc4py.PETSc.Error: error code 73
[0] MatNullSpaceTest() at /usr/local/petsc/src/mat/interface/matnull.c:411
[0] MatMult() at /usr/local/petsc/src/mat/interface/matrix.c:2595
[0] Object is in wrong state
[0] Not for unassembled matrix
WARNING! There are options you set that were not used!
WARNING! could be spelling mistake, etc!
There are 2 unused database options. They are:
Option left: name:-nls_solve_mat_mumps_icntl_24 value: 1 source: code
Option left: name:-nls_solve_mat_mumps_icntl_25 value: 0 source: code

Could you start by checking that the problem is solved with the nullspace supplied using a linear/direct solver?

To remove the nullspace of b you need to actually modify the Newton solver (how the right-hand side is assembled). It is not sufficient to call nullspace.remove(solver.b) once before solving.
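One way to do that (a sketch, not tested; the class name is hypothetical) is to subclass NonlinearProblem and orthogonalize the residual right after each assembly, so every Newton step sees a consistent right-hand side:

from dolfinx.fem.petsc import NonlinearProblem
from petsc4py import PETSc


class NullspaceNonlinearProblem(NonlinearProblem):
    """Hypothetical helper: remove a given nullspace from the residual
    every time the Newton solver assembles it."""

    def __init__(self, F, u, nullspace, **kwargs):
        super().__init__(F, u, **kwargs)
        self._nullspace = nullspace

    def F(self, x: PETSc.Vec, b: PETSc.Vec) -> None:
        super().F(x, b)            # assemble the residual as usual
        self._nullspace.remove(b)  # project out the constant mode


# usage sketch: problem = NullspaceNonlinearProblem(weak_form, volt, nullspace)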


I can try in 10 hours. In the meantime, should I also assemble(solver.A) before setting the nullspace?

Also, am I not already using a direct solver? I use "preonly" with "lu" and MUMPS.

You are using a direct solver inside the NewtonSolver, which obfuscates things a bit.
I would like to take a step back and ensure that you can get a linear solver (with no Newton method) to converge/work for your given problem, by applying the appropriate null space.
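For instance, something along these lines (a sketch, untested; it reuses weak_form, volt, V and mesh from your code, and splits the linearized form with ufl's replace/system):

import ufl
from dolfinx.fem import Function, form
from dolfinx.fem.petsc import assemble_matrix, assemble_vector
from petsc4py import PETSc

# Linearize: swap the unknown Function for a TrialFunction, then split into a and L
u = ufl.TrialFunction(V)
a, L = ufl.system(ufl.replace(weak_form, {volt: u}))

A = assemble_matrix(form(a))
A.assemble()
b = assemble_vector(form(L))
b.ghostUpdate(addv=PETSc.InsertMode.ADD, mode=PETSc.ScatterMode.REVERSE)

# Attach the constant nullspace and make the right-hand side orthogonal to it
nullspace = PETSc.NullSpace().create(constant=True)
assert nullspace.test(A)
A.setNullSpace(nullspace)
nullspace.remove(b)

# Direct solve with MUMPS, with null-pivot detection enabled
opts = PETSc.Options()
opts["mat_mumps_icntl_24"] = 1
opts["mat_mumps_icntl_25"] = 0
ksp = PETSc.KSP().create(mesh.comm)
ksp.setOperators(A)
ksp.setType("preonly")
ksp.getPC().setType("lu")
ksp.getPC().setFactorSolverType("mumps")
ksp.setFromOptions()

uh = Function(V)
ksp.solve(b, uh.vector)
uh.x.scatter_forward()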

Calling assemble on solver.A shouldn’t be needed. You can test whether it is a null space of the assembled matrix after the solve by doing something similar to: dolfinx/python/demo/demo_stokes.py at 3478f57f6ea41460c1ec6933a6fb0d33ef844e70 · FEniCS/dolfinx · GitHub
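i.e. something like:

n, converged = solver.solve(volt)
# solver.A has been assembled during the solve, so the test is meaningful here
assert nullspace.test(solver.A)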


OK about your first comment; I will try tonight and let you know.

About the assert statement, I think that’s precisely what I have done in post 16: I pasted the traceback, which indicates a problem with A being unassembled, hence my follow-up question.