When you initialize a mesh with MPI.COMM_SELF, each process creates its own copy of the full mesh (all cells on every process). The mesh is not partitioned, so every process solves the whole problem on its own copy.
If you use MPI.COMM_WORLD, the mesh gets partitioned (one mesh distributed over multiple processes).
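A minimal sketch of the difference (assuming the legacy FEniCS/dolfin Python interface and mpi4py; run with mpirun -n 2 or more to see the effect):

from mpi4py import MPI
import fenics

# All cells on every process (no partitioning): each rank owns the full mesh
mesh_self = fenics.UnitSquareMesh(MPI.COMM_SELF, 32, 32)
# One mesh partitioned and distributed over all processes
mesh_world = fenics.UnitSquareMesh(MPI.COMM_WORLD, 32, 32)

# With mpirun -n 2, the first count stays at 2*32*32 on every rank,
# while the second is roughly half of that per rank
print(mesh_self.num_cells(), mesh_world.num_cells())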
As to the execution time, I think you mean “why is solving slower than projecting?”.
Simply change your solve to use inner(u, v)*dx instead of inner(grad(u), grad(v))*dx. What you will then realize is that your solve function includes a projection (which is solving inner(u, v)*dx == inner(expr, v)*dx).
If you change your solving function to:
def test_solving_time(mesh: fenics.Mesh):
    # define problem
    V = fenics.FunctionSpace(mesh, "CG", 1)
    expr = fenics.Expression("x[0] < 0.5 ? 1. : 0.", degree=1)
    u = fenics.TrialFunction(V)
    v = fenics.TestFunction(V)
    a = fenics.dot(u, v) * fenics.dx
    L = expr * v * fenics.dx
    u = fenics.Function(V)
    # solve (solver_parameters as in your original script)
    fenics.solve(a == L, u, solver_parameters=solver_parameters)
you should get timings similar to those of the projection.
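For reference, the projection side of the comparison is essentially the following (a minimal sketch, assuming the same mesh and function space; test_projection_time is just an illustrative name):

def test_projection_time(mesh: fenics.Mesh):
    V = fenics.FunctionSpace(mesh, "CG", 1)
    expr = fenics.Expression("x[0] < 0.5 ? 1. : 0.", degree=1)
    # project assembles and solves inner(u, v)*dx == inner(expr, v)*dx internally
    return fenics.project(expr, V)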
If you use:
a = fenics.dot(fenics.grad(u), fenics.grad(v)) * fenics.dx
L = expr * v * fenics.dx
u = fenics.Function(V)
in your solve function, it will be marginally slower than projecting expr, because the solver and preconditioner you are using are less efficient on a stiffness matrix than on a mass matrix.
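To see the difference directly, you can wrap each variant in a timer (a sketch using fenics.Timer from the legacy dolfin API; the label is arbitrary):

# start a wall-clock timer, solve, and report the elapsed time
timer = fenics.Timer("solve stiffness form")
fenics.solve(a == L, u, solver_parameters=solver_parameters)
print("solve took", timer.stop(), "s")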