I have made a dummy example where I am solving for the electric potential in a 1 m^3 domain enclosed by two 1 m^2 capacitor plates at x = +/-0.5 m, held at V = +/-1 V respectively. The code:
from mshr import *
from fenics import *
import numpy as np
import matplotlib.pyplot as plt
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

def boundary(x):
    tolerance = 1E-12
    # The two plates: check if the x coordinate is -0.5 or 0.5
    return np.abs(x[0] - 0.5) < tolerance or np.abs(x[0] + 0.5) < tolerance

def Pot(x):
    # Plate potential: -1 V on the x < 0 plate, +1 V on the x > 0 plate
    return -1 if x < 0 else 1

domain = Box(Point(-0.5, -0.5, -0.5), Point(0.5, 0.5, 0.5))
mesh = generate_mesh(domain, 20)
V = FunctionSpace(mesh, 'Lagrange', 1)
u = TrialFunction(V)
v = TestFunction(V)

class MyExpression0(UserExpression):
    def eval(self, value, x):
        value[0] = Pot(x[0])

u_d = MyExpression0(degree=1)
# Define the boundary condition on the two plates
bc = DirichletBC(V, u_d, boundary)
f = Constant(0)
a = dot(grad(u), grad(v))*dx
L = f*v*dx
u = Function(V)
solve(a == L, u, bc)
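(Side note, not part of the script above: a parallel-safe way to inspect the result is to write it to an XDMF file, since each rank then writes only the part of the mesh it owns. A rough sketch:)

# Sketch (my addition): write the solution to file; works in serial and parallel.
outfile = XDMFFile(mesh.mpi_comm(), "potential.xdmf")
outfile.write(u)
outfile.close()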
The code runs fine using python without parallel processing. However, when I run mpirun -n N python Capacitor.py, each of the N processes starts correctly, but each one fails with
*** Error: Unable to evaluate function at point.
*** Reason: The point is not inside the domain. Consider calling "Function::set_allow_extrapolation(true)" on this Function to allow extrapolation.
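(My understanding of this message: in parallel the mesh is partitioned, so a direct u(Point(...)) evaluation fails on any rank whose partition does not contain the point. Below is a minimal sketch of a rank-safe evaluation; the point (0, 0, 0) is just an example and not from my actual script.)

# Sketch (my addition): evaluate u only on ranks whose mesh partition contains
# the point, then share the value with every process via MPI.
p = Point(0.0, 0.0, 0.0)
tree = mesh.bounding_box_tree()
local_value = None
if tree.compute_first_entity_collision(p) < mesh.num_cells():
    local_value = u(p)
values = comm.gather(local_value, root=0)
if rank == 0:
    print("u at p:", next(v for v in values if v is not None))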
Hey Dokken,
thanks for your reply. Here are the versions of the FEniCS packages I'm using:
fenics-basix 0.6.0
fenics-dijitso 2019.2.0.dev0
fenics-dolfin 2019.2.0.dev0
fenics-ffc 2019.2.0.dev0
fenics-fiat 2019.2.0.dev0
fenics-ufl-legacy 2022.3.0
and this is the version of mpi4py:
mpi4py 3.1.3
Note also that I'm running from the Ubuntu terminal, not Docker.
As you haven't specified what system you are running on or the commands you used to install dolfin, I can't reproduce your error and cannot get to the bottom of this.
I'm running on Ubuntu 22.04.2 via WSL, and I (am 95% sure that I) installed FEniCS via the Ubuntu installation process listed here: https://fenicsproject.org/download/archive/.
Hey Dokken, sorry for the confusion: the error I was receiving was due to a point evaluation after the solve. However, this script doesn't run any faster than without mpiexec; in fact, it just repeats the same calculation n times. Is it obvious to you what needs to change to actually allow parallel computation?
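(In case it is useful, here is a minimal diagnostic I can append to the script, just extra prints with nothing else changed. If the run is genuinely parallel, every rank should report the same world size as the -n argument and own only a fraction of the total cells; if every rank reports a size of 1, the processes are running independently rather than as one MPI job.)

# Diagnostic sketch (my addition): report how the mesh is split across ranks.
local_cells = mesh.num_cells()
total_cells = comm.allreduce(local_cells)  # mpi4py allreduce defaults to a sum
print("rank", rank, "of", comm.Get_size(),
      "owns", local_cells, "of", total_cells, "cells")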