How to adaptively refine or coarsen the mesh?

[figure: steady-state concentration profile near the left boundary]
Hello, I want to solve a Poisson-Nernst-Planck (PNP) equation; its steady-state solution looks like the figure above. The left boundary condition prescribes the flux of a chemical species, and the concentration changes sharply near the left boundary. I have the following questions:
(1) Initially, I want the mesh to be denser near the left boundary. Suppose I have a 1d-array

mesh = np.hstack((np.linspace(0, 1.0e-5, 20), np.linspace(1.1e-5, 2.0e-4, 20)))

how do I convert it to a dolfinx mesh?
(2) When I am solving the PNP equation, the solution near the left boundary may suddenly become negative. That is a sign that I should refine the mesh. I know dolfinx.mesh.refine can refine the mesh, but I also want to coarsen the mesh in regions where the solution is flat. How can I do that?
(3) When I am driving the equation towards its steady state, the flux changes with time.

F = flux * v * ufl.ds + (something else)

How can I change the flux in the variational form?
(4) I found that PETSc has a “TS” module which implements many ODE and DAE solvers. Can we use it to solve time-dependent problems?
Can anyone give some examples related to these questions? Thanks a lot.

See for instance
http://jsdokken.com/FEniCS23-tutorial/src/mesh_generation.html
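
For question (1), here is a minimal sketch of turning such a coordinate array into a dolfinx mesh with dolfinx.mesh.create_mesh, run on a single process and using the legacy ufl.VectorElement coordinate element that matches the API used later in this thread (variable names are illustrative):

import numpy as np
import ufl
import dolfinx
from mpi4py import MPI

# Graded 1D point set: dense near the left boundary, coarse towards the right
points = np.hstack((np.linspace(0, 1.0e-5, 20), np.linspace(1.1e-5, 2.0e-4, 20)))
x = points.reshape(-1, 1)  # geometry array of shape (n_points, 1)
cells = np.array([[i, i + 1] for i in range(len(points) - 1)], dtype=np.int64)

# Linear ("P1") coordinate element on intervals
c_el = ufl.Mesh(ufl.VectorElement("Lagrange", ufl.interval, 1))
domain = dolfinx.mesh.create_mesh(MPI.COMM_WORLD, cells, x, c_el)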

Coarsening is quite hard to get to work (especially for all cell types in parallel). I know Chris Richardson put some effort into this some years ago, but I don't know its current state.

Since you are solving 1D equations, I cannot imagine the dof count becoming so large that coarsening is worth the effort.

Make flux a Function, then interpolate the new flux at every time step. See for instance
http://jsdokken.com/FEniCS23-tutorial/src/form_compilation.html
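
A minimal sketch of that idea, assuming a spatially uniform flux and a problem setup like the rest of this thread (the space V, the sin(t) profile and all names here are illustrative):

import numpy as np
import ufl
import dolfinx
from mpi4py import MPI

domain = dolfinx.mesh.create_unit_interval(MPI.COMM_WORLD, 32)
V = dolfinx.fem.FunctionSpace(domain, ("Lagrange", 1))
v = ufl.TestFunction(V)

flux = dolfinx.fem.Function(V)   # coefficient holding the current flux
F = flux * v * ufl.ds            # + (something else); the form is compiled only once

t, dt = 0.0, 0.1
for _ in range(10):
    t += dt
    # Re-interpolate the flux; the compiled form automatically uses the new values
    flux.interpolate(lambda x: np.full(x.shape[1], np.sin(t)))
    # ... assemble / solve with F as usual ...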

OK, thanks for your reply. I will try these examples. :handshake:

Hello, dokken. When I tried to create the mesh, I ran into the following problems.
(1) If I don’t use mpirun, the solution is correct. There is no error.
(2) If I use mpirun, the solution is wrong. In the script, I stack two intervals

xmesh = np.hstack((np.linspace(0, 0.5, 16), np.linspace(0.52, L, 16)))

If I print the grid points, I see the values 0.5 and 0.52 each appearing twice:

[...0.4        0.43333333 0.46666667 0.5        0.5        0.52
0.52       0.61866667 0.71733333 0.816      0.91466667 1.0133333 ...]

and the solution is not the same at the duplicated 0.5 and 0.52 points. Also, when I use mpirun, I get an error at the end of the script:

munmap_chunk(): invalid pointer

(3) If I use MPI.COMM_SELF in the script, I get errors like

mpirun -np 2 python3 test2.py 
HDF5-DIAG: Error detected in HDF5 (1.14.3) MPI-process 1:
  #000: H5F.c line 660 in H5Fcreate(): unable to synchronously create file
    major: File accessibility
    minor: Unable to create file
  #001: H5F.c line 614 in H5F__create_api_common(): unable to create file
    major: File accessibility
    minor: Unable to open file

Here is my complete script

# -*- coding:utf-8 -*-
import ufl 
import dolfinx
from dolfinx.fem.petsc import NonlinearProblem
from dolfinx.nls.petsc import NewtonSolver
from petsc4py import PETSc
from mpi4py import MPI
import numpy as np 

t_ini = 0
t_fin = 2.0
n_step = 20
dt = (t_fin - t_ini) / n_step
nx = 32
L = 2.0

rank = MPI.COMM_WORLD.Get_rank()
xmesh = np.hstack((np.linspace(0, 0.5, 16), np.linspace(0.52, L, 16)))
xmesh = np.reshape(xmesh, (len(xmesh), 1))
connectivity = np.array([[i, i+1] for i in range(len(xmesh)-1)], dtype=np.int64)
c_el = ufl.Mesh(ufl.VectorElement("Lagrange", ufl.interval, 1))
domain = dolfinx.mesh.create_mesh(MPI.COMM_WORLD, connectivity, xmesh, c_el)
#domain = dolfinx.mesh.create_interval(MPI.COMM_WORLD, nx, points=(0.0, L))

V = dolfinx.fem.FunctionSpace(domain, ("Lagrange", 1))
fdim = domain.topology.dim - 1
boundary_facets = dolfinx.mesh.locate_entities_boundary(domain, fdim, lambda x: np.isclose(L, x[0]))
bc = dolfinx.fem.dirichletbc(PETSc.ScalarType(1.0), dolfinx.fem.locate_dofs_topological(V, fdim, boundary_facets), V)

u_n = dolfinx.fem.Function(V)
u_n.interpolate(lambda x: np.ones(x[0].shape))
u = dolfinx.fem.Function(V)
v = ufl.TestFunction(V)
a = u*v*ufl.dx + dt*ufl.dot(ufl.grad(u), ufl.grad(v))*ufl.dx + dt*ufl.sin(u)*v*ufl.ds
L_form = u_n*v*ufl.dx  # renamed to avoid shadowing the domain length L defined above
F = a - L_form

problem = NonlinearProblem(F, u, bcs=[bc])
solver = NewtonSolver(MPI.COMM_WORLD, problem)
solver.convergence_criterion = "incremental"
solver.rtol = 1e-6

ksp = solver.krylov_solver
opts = PETSc.Options()
option_prefix = ksp.getOptionsPrefix()
opts[f"{option_prefix}ksp_type"] = "cg"
opts[f"{option_prefix}pc_type"] = "gamg"
opts[f"{option_prefix}pc_factor_mat_solver_type"] = "mumps"
ksp.setFromOptions()
xdmf = dolfinx.io.XDMFFile(domain.comm, "diffusion.xdmf", "w")
xdmf.write_mesh(domain)

t = t_ini 
while t < t_fin:
    n, converged = solver.solve(u)
    if (rank == 0):
        print(f"Number of interations: {n:d}")
    u_n.x.array[:] = u.x.array
    t += dt
    xdmf.write_function(u, t)
xdmf.close()

And I use the following script to plot the solution:

import h5py
import numpy as np 
import matplotlib.pyplot as plt

f = h5py.File("diffusion.h5")
x = np.array(f['Mesh']['mesh']['geometry'])
x = x[:, 0]
order = x.argsort()
x = x[order]

for t_key in f['Function']['f'].keys():
    t = float(t_key)
    u = np.array(f['Function']['f'][t_key][:, 0])
    plt.plot(x, u[order], color="red")
plt.show()
f.close()

The solution becomes this if I run the script with mpirun -np 2 python3 script.py
[figure: incorrect solution obtained with mpirun -np 2]

See Mesh creation in serial and parallel — FEniCSx Documentation for how to create MPI-compatible meshes.

This line looks strange; I guess it's causing the problems. As Jack said, 1D should be “easy”, no special topology needed.

You could try creating a standard mesh first (with create_unit_interval or create_interval), then just replace the values stored in domain.geometry.x.
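
A minimal sketch of that suggestion, assuming a monotone stretching map so that cells do not get inverted (the quadratic map clustering points near x = 0 is just an example):

import dolfinx
from mpi4py import MPI

nx = 31
L = 2.0
domain = dolfinx.mesh.create_interval(MPI.COMM_WORLD, nx, points=(0.0, L))

# domain.geometry.x has shape (num_local_points, 3); modify the x-coordinates in place
x = domain.geometry.x
x[:, 0] = L * (x[:, 0] / L) ** 2

Since every rank applies the same smooth map to its own coordinates, this also works under mpirun without any special treatment of the input points.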

What is happening here is that you send in the same points on every process, and DOLFINx then tries to figure out whether they are the same or not. Thus, I would send in the points on a single process only, as done in

1. Send all points and cells on one process

from Mesh creation in serial and parallel — FEniCSx Documentation


Here is a working code:

rank = MPI.COMM_WORLD.Get_rank()
if rank == 0:
    xmesh = np.hstack((np.linspace(0, 0.2, 32), np.linspace(0.20625, L, 33)), dtype=np.float64)
    xmesh = np.reshape(xmesh, (len(xmesh), 1))
    connectivity = np.array([[i, i+1] for i in range(len(xmesh)-1)], dtype=np.int64)
else:
    xmesh = np.empty((0, 1), dtype=np.float64)
    connectivity = np.empty((0, 2), dtype=np.int64)

c_el = ufl.Mesh(ufl.VectorElement("Lagrange", ufl.interval, 1))
shared_partitioner = dolfinx.mesh.create_cell_partitioner(dolfinx.mesh.GhostMode.shared_facet)
domain = dolfinx.mesh.create_mesh(MPI.COMM_WORLD, connectivity, xmesh, c_el, partitioner=shared_partitioner)

shared_partitioner is required because adjacent vertices on different processes need to “know” about each other.